Method of synthetic data generation and architecture of face recognition system for interaction with robots in cyberphysical space

Dmitrij A. Malov
Saint-Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), Laboratory of Autonomous Robotic Systems, Junior Research Scientist, 39, 14 line V.O., Saint-Petersburg, 199178, Russia, tel.: +7(931)358-83-78

Maksim A. Letenkov
Saint-Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS), Laboratory of Big Data of Socio-Cyberphysical Systems, Junior Research Scientist, 39, 14 line V.O., Saint-Petersburg, 199178, Russia


Received 21 April 2019

Abstract
In this paper, we discuss the problem of user identification in cyberphysical space based on images of the user's face. We analyse existing methods and approaches to facial recognition and, building on this analysis, propose a new method for generating synthetic samples that makes it possible to create datasets for neural network training. We also design the architecture of the recognition system itself, which provides for retraining the model and selecting the optimal configuration of the neural network. The paper considers scenarios of user interaction with mobile robotic systems within the cyberphysical environment based on the developed face recognition system.
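
The abstract names a method for generating synthetic samples to train face recognition networks but does not detail it here. Purely as an illustration of the general technique, below is a minimal sketch of one common approach to synthetic face-image generation: a small GAN generator/discriminator pair in Python with TensorFlow/Keras. The layer sizes, LATENT_DIM, IMG_SHAPE, and all function names are assumptions of this sketch, not the authors' published method.

```python
# Minimal, generic sketch of GAN-based synthetic image generation for
# building face datasets. Illustrative only; NOT the authors' method.
# Assumes TensorFlow 2.x (tf.keras).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

LATENT_DIM = 100          # assumed size of the random noise vector
IMG_SHAPE = (64, 64, 3)   # assumed face-crop resolution

def build_generator() -> Model:
    """Map a noise vector to a 64x64 RGB image in [-1, 1]."""
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(8 * 8 * 128, activation="relu")(z)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same",
                               activation="relu")(x)   # 16x16
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same",
                               activation="relu")(x)   # 32x32
    img = layers.Conv2DTranspose(3, 4, strides=2, padding="same",
                                 activation="tanh")(x) # 64x64
    return Model(z, img, name="generator")

def build_discriminator() -> Model:
    """Score an image as real (1) or synthetic (0)."""
    img = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 4, strides=2, padding="same")(img)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return Model(img, out, name="discriminator")

def generate_synthetic_batch(generator: Model, n: int) -> np.ndarray:
    """Sample n synthetic images from random noise (useful once trained)."""
    noise = np.random.normal(size=(n, LATENT_DIM)).astype("float32")
    return generator.predict(noise, verbose=0)

if __name__ == "__main__":
    g = build_generator()
    samples = generate_synthetic_batch(g, 4)  # untrained: noise-like output
    print(samples.shape)  # (4, 64, 64, 3)
```

In the standard GAN training scheme, the discriminator is trained to separate real face crops from generated ones while the generator is trained to fool it; once trained, generate_synthetic_batch can populate a dataset with labelled synthetic samples.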

Key words
Cyberphysical space, facial recognition, autonomous robotic systems, synthetic data, artificial neural networks.

Acknowledgements
This research is supported by the Council for Grants of the President of the Russian Federation (project No. MK-383.2018.9).

DOI

https://doi.org/10.31776/RTCJ.7203 

Bibliographic description
Malov, D. and Letenkov, M. (2019). Method of synthetic data generation and architecture of face recognition system for interaction with robots in cyberphysical space. Robotics and Technical Cybernetics, 7(2), pp.100-108.

UDC identifier:
004.032.26
