ViStripformer: A Token-Efficient Transformer for Versatile Video Restoration (2312.14502v1)

Published 22 Dec 2023 in cs.CV

Abstract: Video restoration is a low-level vision task that seeks to recover clean, sharp videos from quality-degraded frames, typically by exploiting temporal information from adjacent frames. Transformers have recently attracted considerable attention in the computer-vision community; however, their self-attention mechanism demands substantial memory, making them ill-suited to high-resolution vision tasks such as video restoration. In this paper, we propose ViStripformer (Video Stripformer), which employs spatio-temporal strip attention to capture long-range correlations, combining intra-frame strip attention (Intra-SA) and inter-frame strip attention (Inter-SA) to extract spatial and temporal information, respectively. It decomposes video frames into strip-shaped features along the horizontal and vertical directions, allowing Intra-SA and Inter-SA to address degradation patterns of various orientations and magnitudes. Moreover, ViStripformer is an effective and efficient transformer architecture with far lower memory usage than the vanilla transformer. Extensive experiments show that the proposed model achieves superior results with fast inference times on video restoration tasks including video deblurring, demoireing, and deraining.
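The strip-attention idea in the abstract can be made concrete. Below is a minimal PyTorch sketch written from the description above; it is illustrative only, not the authors' released code. The class names, the single-head formulation, and in particular the Inter-SA token layout (gathering the same strip across all T frames into one sequence) are assumptions on my part.

```python
# Minimal sketch of intra-/inter-frame strip attention, assuming a
# single-head formulation and a simple token layout. Not the paper's code.
import torch
import torch.nn as nn


class StripSelfAttention(nn.Module):
    """Plain single-head self-attention over a token sequence."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):  # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.proj(attn @ v)


class IntraStripAttention(nn.Module):
    """Intra-SA sketch: each row (horizontal strip) or column (vertical
    strip) of a frame attends only within itself, so the attention map
    is W x W (or H x H) rather than the vanilla HW x HW."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn_h = StripSelfAttention(dim)
        self.attn_v = StripSelfAttention(dim)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Horizontal strips: every row becomes a sequence of W tokens.
        h = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        h = self.attn_h(h).reshape(B, H, W, C)
        # Vertical strips: every column becomes a sequence of H tokens.
        v = x.permute(0, 3, 2, 1).reshape(B * W, H, C)
        v = self.attn_v(v).reshape(B, W, H, C).permute(0, 2, 1, 3)
        return (h + v).permute(0, 3, 1, 2)  # back to (B, C, H, W)


class InterStripAttention(nn.Module):
    """Inter-SA sketch: the same strip gathered across all T frames forms
    one joint sequence, capturing temporal correlations at strip level."""
    def __init__(self, dim: int):
        super().__init__()
        self.attn_h = StripSelfAttention(dim)
        self.attn_v = StripSelfAttention(dim)

    def forward(self, x):  # x: (B, T, C, H, W)
        B, T, C, H, W = x.shape
        # Row strips across time: (T * W) tokens per row position.
        h = x.permute(0, 3, 1, 4, 2).reshape(B * H, T * W, C)
        h = self.attn_h(h).reshape(B, H, T, W, C).permute(0, 2, 4, 1, 3)
        # Column strips across time: (T * H) tokens per column position.
        v = x.permute(0, 4, 1, 3, 2).reshape(B * W, T * H, C)
        v = self.attn_v(v).reshape(B, W, T, H, C).permute(0, 2, 4, 3, 1)
        return h + v  # (B, T, C, H, W)


if __name__ == "__main__":
    frames = torch.randn(2, 5, 32, 64, 64)  # (B, T, C, H, W)
    intra = IntraStripAttention(32)
    inter = InterStripAttention(32)
    per_frame = intra(frames.flatten(0, 1)).reshape(frames.shape)
    print(inter(per_frame).shape)  # torch.Size([2, 5, 32, 64, 64])
```

This sketch also makes the claimed memory saving visible: vanilla spatio-temporal self-attention over a T-frame clip of H x W features forms a (T·H·W) x (T·H·W) attention map, while the strip variants above only ever form W x W, H x H, (T·W) x (T·W), and (T·H) x (T·H) maps per strip (assuming the paper's token layout resembles this sketch).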

In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Xu, C., Zhang, K., Yu, X., Zhong, Y., Ren, W., Suominen, H., Li, H.: Arvo: Learning all-range volumetric correspondence for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7721–7731 (2021) Yu et al. [2022] Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. 
IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. 
[2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. 
Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. 
[2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. 
Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 
143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 
143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. 
[2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. 
In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. 
In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. 
[2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. 
[2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. 
[2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. 
[2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 
143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. 
Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. 
[2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 
4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. 
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. 
  3. Wang, Y., Lu, Y., Gao, Y., Wang, L., Zhong, Z., Zheng, Y., Yamashita, A.: Efficient video deblurring guided by motion magnitude. In: Proc. European Conf. Computer Vis., pp. 413–429 (2022) Suin and Rajagopalan [2021] Suin, M., Rajagopalan, A.N.: Gated spatio-temporal attention-guided video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7802–7811 (2021) Isobe et al. [2020] Isobe, T., Li, S., Jia, X., Yuan, S., Slabaugh, G., Xu, C., Li, Y.-L., Wang, S., Tian, Q.: Video super-resolution with temporal group attention. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8808–8017 (2020) Dai et al. [2022] Dai, P., Yu, X., Ma, L., Zhang, B., Li, J., Li, W., Shen, J., Qi, X.: Video demoiréing with relation-based temporal consistency. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17622–17631 (2022) Li et al. [2021] Li, D., Xu, C., Zhang, K., Yu, X., Zhong, Y., Ren, W., Suominen, H., Li, H.: Arvo: Learning all-range volumetric correspondence for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7721–7731 (2021) Yu et al. [2022] Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. 
Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. 
[2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. 
[2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 
4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. 
Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. 
Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. 
In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 
528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. 
  4. Suin, M., Rajagopalan, A.N.: Gated spatio-temporal attention-guided video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7802–7811 (2021)
  5. Isobe, T., Li, S., Jia, X., Yuan, S., Slabaugh, G., Xu, C., Li, Y.-L., Wang, S., Tian, Q.: Video super-resolution with temporal group attention. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8008–8017 (2020)
  6. Dai, P., Yu, X., Ma, L., Zhang, B., Li, J., Li, W., Shen, J., Qi, X.: Video demoiréing with relation-based temporal consistency. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17622–17631 (2022)
  7. Li, D., Xu, C., Zhang, K., Yu, X., Zhong, Y., Ren, W., Suominen, H., Li, H.: ARVo: Learning all-range volumetric correspondence for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7721–7731 (2021)
  8. Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022)
  9. Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022)
 10. Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: EDVR: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit. Workshops, pp. 1954–1963 (2019)
 11. Zhu, C., Dong, H., Pan, J., Liang, B., Huang, Y., Fu, L., Wang, F.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022)
 12. Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019)
 13. Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8569–8586 (2022)
 14. Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018)
 15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
 16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
 17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
 18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general U-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
 19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
 20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
 21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
 22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
 23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
 24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
 25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
 26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
 27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
 28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
 29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
 30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
 31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
 32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
 33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
 34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
 35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
 36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
 37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
 38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
 39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
 40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
 41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
 42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
 43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
 44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
 45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
 46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
 47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
 48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2019)
 49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
 50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
 51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
 52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
 53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
 54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
 55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
 56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
 57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
 58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
 59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. 
Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 
143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 
143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. 
[2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Xu, C., Zhang, K., Yu, X., Zhong, Y., Ren, W., Suominen, H., Li, H.: Arvo: Learning all-range volumetric correspondence for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7721–7731 (2021) Yu et al. [2022] Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. 
[2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022) Huang et al. [2022] Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. 
[2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. 
[2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. 
[2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. 
[2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. 
Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. 
Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. 
[2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. 
[2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. 
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
  7. Li, D., Xu, C., Zhang, K., Yu, X., Zhong, Y., Ren, W., Suominen, H., Li, H.: Arvo: Learning all-range volumetric correspondence for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7721–7731 (2021)
  8. Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022)
  9. Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022)
  10. Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit. Workshops, pp. 1954–1963 (2019)
  11. Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022)
  12. Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019)
  13. Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8569–8586 (2022)
  14. Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018)
  15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
  16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
  17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
  18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
  19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
  20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
  21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
  22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
  23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
  24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
  25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
  26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
  27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
  28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
  8. Yu, J., Liu, J., Bo, L., Mei, T.: Memory-augmented non-local attention for video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17834–17843 (2022)
  9. Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022)
  10. Wang, X., Chan, K.C.K., Yu, K., Dong, C., Loy, C.C.: EDVR: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit. Workshops, pp. 1954–1963 (2019)
  11. Zhu, C., Dong, H., Pan, J., Liang, B., Huang, Y., Fu, L., Wang, F.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022)
  12. Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019)
  13. Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8569–8586 (2022)
  14. Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018)
  15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
  16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
  17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
  18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general U-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
  19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
  20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
  21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
  22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
  23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
  24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
  25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
  26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
  27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
  28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Represent. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Represent. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Represent. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2019)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
8102–8111 (2019) Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. 
[2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. 
[2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. 
Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 
9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. 
(2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. 
Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 
646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  9. Huang, C., Li, J., Li, B., Liu, D., Lu, Y.: Neural compression-based feature learning for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5872–5881 (2022) Wang et al. [2019] Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. 
[2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. 
Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. 
Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. 
[2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 
9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. 
In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. 
(2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
  10. Wang, X., Chan, K.C.K., Yu, K., Dong, C., Change Loy, C.: Edvr: Video restoration with enhanced deformable convolutional networks. In: Proc. Conf. Comput. Vis. Pattern Recognit Workshops, pp. 1954–1963 (2019) Chao et al. [2022] Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. [2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. 
[2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chao, Z., Hang, D., Jinshan, P., Boyang, L., Yuhao, H., Lean, F., Fei, W.: Deep recurrent neural network with multi-scale bi-directional propagation for video deblurring. In: Proc. AAAI Conf. Artif. Intell., pp. 3598–3607 (2022) Yang et al. 
[2019] Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019) Yang et al. [2022] Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. 
[2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. 
[2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general U-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. 
[2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. 
[2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. 
[2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. 
In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. 
[2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. 
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. 
In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  12. Yang, W., Liu, J., Feng, J.: Frame-consistent recurrent video deraining with dual-level flow. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1661–1670 (2019)
  13. Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal. Mach. Intell. 44(11), 8569–8586 (2022)
  14. Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018)
  15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
  16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
  17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
  18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
  19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
  20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
  21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
  22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
  23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
  24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
  25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
  26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
  27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
  28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 
9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 27(4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. 
Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  13. Yang, W., Tan, R.T., Feng, J., Wang, S., Cheng, B., Liu, J.: Recurrent multi-frame deraining: Combining physics guidance and adversarial learning. IEEE Trans. Pattern Anal.Mach. Intell. 44(11), 8569–8586 (2022) Liu et al. [2018] Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018) Su et al. [2017] Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. 
[2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017) Sajjadi et al. [2018] Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. 
[2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
6626–6634 (2018) Zamir et al. [2021] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. 
Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
14. Liu, J., Yang, W., Yang, S., Guo, Z.: Erase or fill? Deep joint recurrent rain removal and reconstruction in videos. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3233–3242 (2018)
15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. 
In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 
528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. 
Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
15. Su, S., Delbracio, M., Wang, J., Sapiro, G., Heidrich, W., Wang, O.: Deep video deblurring for hand-held cameras. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1279–1288 (2017)
16. Sajjadi, M.S.M., Vemulapalli, R., Brown, M.: Frame-recurrent video super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6626–6634 (2018)
17. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021)
18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general U-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H., Shao, L.: Multi-stage progressive image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 14816–14826 (2021) Wang et al. [2022] Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022) Zamir et al. [2022] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. 
Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. 
[2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. 
[2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. 
Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. 
[2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. 
Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. 
Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. 
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  18. Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., Li, H.: Uformer: A general u-shaped transformer for image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17662–17672 (2022)
  19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022)
  20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
  21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
  22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
  23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
  24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
  25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
  26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
  27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
  28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. 
[2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. 
[2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. 
[2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  19. Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.-H.: Restormer: Efficient transformer for high-resolution image restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5718–5729 (2022) Cho et al. [2021] Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021) Li et al. [2022] Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022) Chen et al. [2021] Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021) Tsai et al. [2022] Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022) Yang et al. [2020] Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. 
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. 
Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
20. Cho, S.-J., Ji, S.-W., Hong, J.-P., Jung, S.-W., Ko, S.-J.: Rethinking coarse-to-fine approach in single image deblurring. In: Proc. Int. Conf. Comput. Vis., pp. 4641–4650 (2021)
21. Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
22. Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
23. Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2019)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020) Chi et al. [2021] Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021) Suin et al. [2020] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. 
In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. 
[2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. 
Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. 
[2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. 
Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. 
In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. 
[2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 
528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. 
Li, B., Liu, X., Hu, P., Wu, Z., Lv, J., Peng, X.: All-in-one image restoration for unknown corruption. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 17431–17441 (2022)
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 
27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 
213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., Gao, W.: Pre-trained image processing transformer. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 12299–12310 (2021)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. 
Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
[2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. 
In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2018)
Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
Tsai, F.-J., Peng, Y.-T., Lin, Y.-Y., Tsai, C.-C., Lin, C.-W.: Stripformer: Strip transformer for fast image deblurring. In: Proc. European Conf. Computer Vis., pp. 146–162 (2022)
Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. 
[2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. 
[2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. 
European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. 
Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
7794–7803 (2018)
24. Yang, F., Yang, H., Fu, J., Lu, H., Guo, B.: Learning texture transformer network for image super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5791–5800 (2020)
25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: ERDN: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020) Jiang et al. [2022] Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022) Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022) Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. 
[2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. 
[2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. 
In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. 
(2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 
565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. 
[2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. 
[2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. 
Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
25. Chi, Z., Wang, Y., Yu, Y., Tang, J.: Test-time fast adaptation for dynamic scene deblurring via meta-auxiliary learning. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9133–9142 (2021)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp.
8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. 
Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. 
In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. 
[2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
26. Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3606–3615 (2020)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp.
8102–8111 (2019) Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. 
Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. 
[2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. 
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. 
In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. 
[2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
27. Jiang, B., Xie, Z., Xia, Z., Li, S., Liu, S.: Erdn: Equivalent receptive field deformable network for video deblurring. In: Proc. European Conf. Computer Vis., pp. 663–678 (2022)
Patil et al. [2022] Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
Kim and Lee [2015] Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022)
Liang et al.
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 
12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. 
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. 
Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. 
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. 
European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. 
  28. Patil, P.W., Gupta, S., Rana, S., Venkatesh, S.: Video restoration framework and its meta-adaptations to data-poor conditions. In: Proc. European Conf. Computer Vis., pp. 143–160 (2022)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  29. Kim, T.H., Lee, K.M.: Generalized video deblurring for dynamic scenes. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 5426–5434 (2015) Zhou et al. [2022] Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022) Vaswani et al. [2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017) Dosovitskiy et al. [2021] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021) Liu et al. [2021] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021) Chu et al. [2021] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021) Chen et al. [2022] Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. 
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. 
Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  30. Zhou, K., Li, W., Lu, L., Han, X., Lu, J.: Revisiting temporal alignment for video restoration. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 6043–6052 (2022)
  31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. 
[2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 
646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
31. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proc. Neural Inf. Process. Syst., pp. 6000–6010 (2017)
32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
35. Chen, C.-F., Panda, R., Fan, Q.: RegionViT: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 28(4), 2089–2102 (2019)
[2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. 
Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. 
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. 
Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  32. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: Proc. Int. Conf. Learn. Repre. (2021)
  33. Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proc. Int. Conf. Comput. Vis., pp. 9992–10002 (2021)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. 
[2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. 
Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 
713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. 
[2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. 
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  34. Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Proc. Neural Inf. Process. Syst., pp. 9355–9366 (2021)
  35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022)
  36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  35. Chen, C.-F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: Proc. Int. Conf. Learn. Repre. (2022) Carion et al. [2020] Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. 
Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020) Ranftl et al. [2021] Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021) Zhu et al. [2021] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. 
[2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021) Xie et al. [2021] Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021) Lin et al. [2022] Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. 
[2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022) Zeng et al. [2020] Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. [2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. 
[2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020) Wang et al. [2018] Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023) Jiang et al. 
[2018] Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018) Liang et al. [2022a] Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. In: arXiv Preprint arXiv:2201.12288 (2022) Liang et al. [2022b] Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022) Sun et al. [2018] Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018) Pan et al. [2023] Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023) Li et al. [2023] Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
36. Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S.: End-to-end object detection with transformers. In: Proc. European Conf. Computer Vis., pp. 213–229 (2020)
37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Represent. (2021)
39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
(4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018) Wang et al. [2022] Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. 
[2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. 
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. 
Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  37. Ranftl, R., Bochkovskiy, A., Koltun, V.: Vision transformers for dense prediction. In: Proc. Int. Conf. Comput. Vis., pp. 12159–12168 (2021)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He et al.
[2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. 
Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  38. Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: Proc. Int. Conf. Learn. Repre. (2021)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: SegFormer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: FastDeRain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  39. Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J.M., Luo, P.: Segformer: Simple and efficient design for semantic segmentation with transformers. In: Proc. Neural Inf. Process. Syst., pp. 12077–12090 (2021)
  40. Lin, J., Cai, Y., Hu, X., Wang, H., Yan, Y., Zou, X., Ding, H., Zhang, Y., Timofte, R., Van Gool, L.: Flow-guided sparse transformer for video deblurring. In: Proc. Int. Conf. Mach. Learn., pp. 13334–13343 (2022)
  41. Zeng, Y., Fu, J., Chao, H.: Learning joint spatial-temporal transformations for video inpainting. In: Proc. European Conf. Computer Vis., pp. 528–543 (2020)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: Vrt: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  42. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 7794–7803 (2018)
  43. Liang, J., Cao, J., Fan, Y., Zhang, K., Ranjan, R., Li, Y., Timofte, R., Van Gool, L.: VRT: A video restoration transformer. arXiv preprint arXiv:2201.12288 (2022)
8102–8111 (2019) Wang, S., Zhu, L., Fu, H., Qin, J., SchÖnlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022) Sun et al. [2018] Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018) Zheng et al. [2020] Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. 
[2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020) He et al. [2019] He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moire patterns using mopnet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019) He et al. [2020] He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 
8102–8111 (2019) He, B., Wang, C., Shi, B., Duan, L.-Y.: Fhde2net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020) Liu et al. [2020] Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020) Yu et al. [2022] Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022) Ji and Yao [2022] Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. [2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022) Lai et al. 
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)

  44. Liang, J., Fan, Y., Xiang, X., Ranjan, R., Ilg, E., Green, S., Cao, J., Zhang, K., Timofte, R., Van Gool, L.: Recurrent video restoration transformer with guided deformable attention. In: Proc. Neural Inf. Process. Syst., pp. 378–393 (2022)
  45. Sun, D., Yang, X., Liu, M.-Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8934–8943 (2018)
[2017] Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017) Nah et al. [2017] Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017) Nah et al. [2019] Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019) Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
  46. Pan, J., Xu, B., Dong, J., Ge, J., Tang, J.: Deep discriminative spatial and temporal network for efficient video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 22191–22200 (2023)
  47. Li, D., Shi, X., Zhang, Y., Cheung, K.C., See, S., Wang, X., Qin, H., Li, H.: A simple baseline for video restoration with grouped spatial-temporal shift. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 9822–9832 (2023)
  48. Jiang, T.-X., Huang, T.-Z., Zhao, X.-L., Deng, L.-J., Wang, Y.: Fastderain: A novel video rain streak removal method using directional gradient priors. IEEE Trans. Image Process. (4), 2089–2102 (2018)
  49. Wang, S., Zhu, L., Fu, H., Qin, J., Schönlieb, C.-B., Feng, W., Wang, S.: Rethinking video rain streak removal: A new synthesis model and a deraining network with video rain prior. In: Proc. European Conf. Computer Vis., pp. 565–582 (2022)
  50. Sun, Y., Yu, Y., Wang, W.: Moiré photo restoration using multiresolution convolutional neural networks. IEEE Trans. Image Process. 27(8), 4160–4172 (2018)
  51. Zheng, B., Yuan, S., Slabaugh, G., Leonardis, A.: Image demoireing with learnable bandpass filters. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3633–3642 (2020)
  52. He, B., Wang, C., Shi, B., Duan, L.-Y.: Mop moiré patterns using MopNet. In: Proc. Int. Conf. Comput. Vis., pp. 2424–2432 (2019)
  53. He, B., Wang, C., Shi, B., Duan, L.-Y.: FHDe2Net: Full high definition demoireing network. In: Proc. European Conf. Computer Vis., pp. 713–729 (2020)
  54. Liu, L., Liu, J., Yuan, S., Slabaugh, G., Leonardis, A., Zhou, W., Tian, Q.: Wavelet-based dual-branch network for image demoireing. In: Proc. European Conf. Computer Vis., pp. 86–102 (2020)
  55. Yu, X., Dai, P., Li, W., Ma, L., Shen, J., Li, J., Qi, X.: Towards efficient and scale-robust ultra-high-definition image demoireing. In: Proc. European Conf. Computer Vis., pp. 646–662 (2022)
  56. Ji, B., Yao, A.: Multi-scale memory-based video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 1909–1918 (2022)
  57. Lai, W.-S., Huang, J.-B., Ahuja, N., Yang, M.-H.: Deep Laplacian pyramid networks for fast and accurate super-resolution. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 624–632 (2017)
  58. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 3883–3891 (2017)
  59. Nah, S., Son, S., Lee, K.M.: Recurrent neural networks with intra-frame iterations for video deblurring. In: Proc. Conf. Comput. Vis. Pattern Recognit., pp. 8102–8111 (2019)
