
3066                                 Journal of Software  软件学报 Vol.32, No.10, October 2021

                [65]    Escalera S, Gonzàlez J, Baró X, Reyes M, Lopes O, Guyon I, Athitsos V, Escalante HJ. Multi-modal gesture recognition challenge 2013: Dataset and results. In: Proc. of the ACM Int’l Conf. on Multimodal Interaction. 2013.
                [66]    Triesch J, von der Malsburg C. Robust classification of hand postures against complex backgrounds. In: Proc. of the 2nd Int’l Conf. on Automatic Face and Gesture Recognition. 1996. 170–175.
                [67]    Triesch J, von der Malsburg C. A system for person-independent hand posture recognition against complex backgrounds. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2001,23(12):1449–1453.
                [68]    Marcel S, Bernier O. Hand posture recognition in a body-face centered space. In: Proc. of the Conf. on Human Factors in Computing Systems (CHI). 1999.
                [69]    Marcel S, Bernier O, Viallet JE, Collobert D. Hand gesture recognition using input/output hidden Markov models. In: Proc. of the 4th Int’l Conf. on Automatic Face and Gesture Recognition (AFGR). 2000.
                [70]    Smedt QD, Wannous H, Vandeborre JP. Skeleton-based dynamic hand gesture recognition. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW). 2016.
                [71]    Smedt QD, Wannous H, Vandeborre JP, Guerry J, Bertrand LS, Filliat D. SHREC 2017 track: 3D hand gesture recognition using a
                     depth and skeletal dataset. In: Proc. of the 10th Eurographics Workshop on 3D Object Retrieval. 2017.
                [72]    Zhang YF, Cao CQ, Cheng J, Lu HQ. EgoGesture: A new dataset and benchmark for egocentric hand gesture recognition. IEEE Trans. on Multimedia (T-MM), 2018,20(5):1038–1050.
                [73]    Garcia-Hernando G, Yuan SX, Baek SR, Kim TK. First-person hand action benchmark with RGB-D videos and 3D hand pose annotations. In: Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2018.
                [74]    Bullock IM, Feix T, Dollar AM. The Yale human grasping dataset: Grasp, object, and task data in household and machine shop environments. The Int’l Journal of Robotics Research, 2015,34(3):251–255.
                [75]    Zhang YP, Han T, Ren ZM, Umetani N, Tong X, Liu Y, Shiratori T, Cao X. BodyAvatar: Creating freeform 3D avatars using first-person body gestures. In: Proc. of the 26th Annual ACM Symp. on User Interface Software and Technology. 2013. 387–396.
                [76]    Pfeil KP, Koh SL, LaViola JJ. Exploring 3D gesture metaphors for interaction with unmanned aerial vehicles. In: Proc. of the 2013 Int’l Conf. on Intelligent User Interfaces. 2013. 257–266.
                [77]    Vinayak K, Ramani K. Extracting hand grasp and motion for intent expression in mid-air shape deformation: A concrete and iterative exploration through a virtual pottery application. Computers & Graphics, 2016,55:143–156.
                [78]    Hilliges O, Kim D, Izadi S, Weiss M, Wilson AD. HoloDesk: Direct 3D interactions with a situated see-through display. In: Proc. of the SIGCHI Conf. on Human Factors in Computing Systems. 2012. 2421–2430.
                [79]    Colaço A, Kirmani A, Yang HS. Mime: Compact, low-power 3D gesture sensing for interaction with head-mounted displays. In: Proc. of the 26th Annual ACM Symp. on User Interface Software and Technology. 2013. 227–236.
                [80]    Liang H, Wang J, Sun Q, Liu YJ, Yuan JS, Luo J, He Y. Barehanded music: Real-time hand interaction for virtual piano. In: Proc. of the 20th ACM SIGGRAPH Symp. on Interactive 3D Graphics and Games. 2016. 87–94.
                [81]    Kim YK, Bae SH. SketchingWithHands: 3D sketching handheld products with first-person hand posture. In: Proc. of the 29th Annual Symp. on User Interface Software and Technology. ACM, 2016. 797–808.
                [82]    Yi X, Yu C, Zhang MR, Gao SD, Sun K, Shi YC. ATK: Enabling ten-finger freehand typing in air based on 3D hand tracking data. In: Proc. of the 28th Annual ACM Symp. on User Interface Software and Technology. 2015. 539–548.
                [83]    Cui J, Fellner DW, Kuijper A, Sourin A. Mid-air Gestures for Virtual Modeling with Leap Motion. Springer Int’l Publishing, 2016.
                [84]    Liang H, Chang J, Kazmi IK, Zhang JJ, Jiao PF. Hand gesture-based interactive puppetry system to assist storytelling for children. The Visual Computer, 2016,33(4):517–531.
                [85]    Hatscher B, Luz M, Nacke LE, Elkmann N, Müller V, Hansen C. GazeTap: Towards hands-free interaction in the operating room. In: Proc. of the 19th ACM Int’l Conf. on Multimodal Interaction. 2017. 243–251.
                [86]    Sun SQ, Zhang LS. Three-dimension sketch design oriented to product innovation. Computer Integrated Manufacturing Systems, 2007,13(2):224–227, 274 (in Chinese with English abstract).
                [87]    Shen JC, Luo YL, Wu ZK, Tian Y, Deng QQ. CUDA-based real-time hand gesture interaction and visualization for CT volume dataset using Leap Motion. The Visual Computer, 2016,32(3):359–370.