
1730                                                       软件学报  2024 年第 35 卷第 4 期

          [9]    Howard AG, Zhu M, Chen B,  et  al. MobileNets: Efficient convolutional neural networks for mobile vision applications.
             arXiv:1704.04861, 2017.
         [10]    Zhang X, Zhou X, Lin M, et al. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In: Proc. of
             the IEEE/CVF Conf. on Computer Vision and Pattern Recognition. 2018. 6848−6856.
         [11]    Han K, Wang Y, Tian Q, et al. GhostNet: More features from cheap operations. In: Proc. of the IEEE/CVF Conf. on Computer
             Vision and Pattern Recognition. 2020. 1577−1586.
         [12]    Mehta S, Rastegari M. MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer. arXiv:2110.02178,
             2021.
         [13]    Lee JY, Park RH. Complex-valued disparity: Unified depth model of depth from stereo, depth from focus, and depth from defocus
             based on the light field gradient. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2021, 43(3): 830−841.
         [14]    Muhammad M, Choi TS. Sampling for shape from focus  in optical microscopy.  IEEE Trans.  on Pattern Analysis and  Machine
             Intelligence, 2012, 34(3): 564−573.
         [15]    Jeon HG, Surh J, Im S, et al. Ring difference filter for fast and noise robust depth from focus. IEEE Trans. on Image Processing,
             2020, 29: 1045−1060.
         [16]    Yan T, Hu Z, Qian YH, et al. 3D shape reconstruction from multifocus image fusion using a multidirectional modified Laplacian
             operator. Pattern Recognition, 2020, 98: 107065.
         [17]    Yan T, Wu P, Qian YH, et al. Multiscale fusion and aggregation PCNN for 3D shape recovery. Information Sciences, 2020, 536:
             277−297.
         [18]    Minhas R, Mohammed AA, Wu QM. Shape from focus using fast discrete curvelet transform. Pattern Recognition, 2011, 44(4):
             839−853.
         [19]    Ali U, Mahmood MT. Robust focus volume regularization in shape from focus. IEEE Trans. on Image Processing, 2021, 30:
             7215−7227.
         [20]    Moeller M, Benning M, Schönlieb CB, et al. Variational depth from focus reconstruction. IEEE Trans. on Image Processing, 2015,
             24(12): 5369−5378.
         [21]    Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern
             Recognition. 2018. 7132−7141.
         [22]    Zhou D, Hou Q, Chen Y, et al. Rethinking bottleneck structure for efficient mobile network design. arXiv:2007.02269, 2020.
         [23]    Chollet F. Xception: Deep learning with depthwise separable convolutions. In: Proc. of the IEEE Conf. on Computer Vision and
             Pattern Recognition. 2017. 1800−1807.
         [24]    Zhang T, Qi GJ, Xiao B, et al. Interleaved group convolutions for deep neural networks. arXiv:1707.02725, 2017.
         [25]    Tan M, Le QV. EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv:1905.11946, 2019.
         [26]    Han K, Wang Y, Zhang Q, et al. Model Rubik’s cube: Twisting resolution, depth and width for TinyNets. arXiv:2010.14819, 2020.
         [27]    Ma N,  Zhang X, Zheng H,  et al. ShuffleNet V2: Practical guidelines for efficient CNN architecture  design.  In: Proc. of the
             European Conf. on Computer Vision. 2018. 122−138.
         [28]    Howard A, Sandler M, Chu G, et al. Searching for MobileNetV3. In: Proc. of the IEEE/CVF Int’l Conf. on Computer Vision. 2019.
             1314−1324.
         [29]    Chen J, Kao S, He H, et al. Run, don’t walk: Chasing higher FLOPS for faster neural networks. In: Proc. of the IEEE/CVF Conf. on
             Computer Vision and Pattern Recognition. 2023.
         [30]    Vasu PKA, Gabriel J, Zhu J, et al. MobileOne: An improved one millisecond mobile backbone. In: Proc. of the IEEE/CVF Conf. on
             Computer Vision and Pattern Recognition. 2023.
         [31]    Chen Y, Dai X, Chen D, et al. Mobile-former: Bridging MobileNet and transformer. In: Proc. of the IEEE/CVF Conf. on Computer
             Vision and Pattern Recognition. 2022. 5260−5269.
         [32]    Pentland AP. A new sense for depth of field. IEEE Trans. on Pattern Analysis and Machine Intelligence, 1987, 9(4): 523−531.
         [33]    Won C, Jeon H. Learning depth from focus in the wild. In: Proc. of the European Conf. on Computer Vision. 2022. 1−18.
         [34]    Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Communications of the
             ACM, 2017, 60(6): 84−90.

         Appendix: Chinese references:
         [3]  Yan T, Qian YH, Li FJ, et al. Intelligent microscopic 3D shape reconstruction from the perspective of 3D time-frequency transform.
             SCIENTIA SINICA Informationis, 2023, 53: 282−308 (in Chinese). [doi: 10.1360/SSI-2021-0386]