4)   Dependence on technology. There is still no unified standard for which technology an interface should use to capture and process gestures. Under these circumstances, and especially given the lack of large-sample user surveys, the conclusions drawn by individual studies carry limited persuasive weight. If the instinctive preferences that users exhibit when choosing gestures in each application scenario could be clearly identified, the findings would be highly instructive.
With the continued development of sensor technology and artificial intelligence, gesture interaction interfaces will become increasingly intuitive and natural, and their applications in fields such as virtual reality and augmented reality will become more widespread.
