Niil IA

Niil IA is an innovative tool that is revolutionizing the recruitment and selection process.

It is important to stress that facial expression measurement must be carried out ethically and with respect for candidates' privacy. Recruiters should be properly trained in the use of facial recognition technologies and should ensure that facial expression analysis is only one of several criteria considered when evaluating candidates.

Using an advanced artificial intelligence algorithm, Niil IA analyzes candidates' facial expressions during their video résumé, providing valuable insights into their emotions. The technology can detect seven distinct emotions, giving recruiters a more complete and accurate understanding of each candidate's emotional profile.

Facial expression analysis is a powerful technique for understanding a person's emotional reactions. Niil IA combines facial recognition with expression analysis to identify and interpret seven emotions: discomfort, satisfaction, animosity, caution, surprise, neutrality, and fear.
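As a hedged illustration of this kind of pipeline, the sketch below scores emotions frame by frame with the open-source Py-Feat toolbox, whose literature overlaps heavily with the references listed further down. It is not Niil IA's actual implementation: the file name and sampling rate are assumptions, and Py-Feat's seven emotion labels (anger, disgust, fear, happiness, sadness, surprise, neutral) differ from the seven labels Niil IA reports.

```python
# Illustrative sketch only: frame-by-frame emotion scoring with the
# open-source Py-Feat toolbox. Niil IA's internal pipeline is not public,
# so the file name and parameters below are assumptions.
from feat import Detector

detector = Detector()  # default face, landmark, AU and emotion models

# Sample roughly one frame per second, assuming a ~30 fps video résumé.
result = detector.detect_video("video_curriculo.mp4", skip_frames=30)

# Each row is one detected face in a sampled frame; the emotion columns
# hold per-class probabilities for Py-Feat's seven emotion labels.
print(result.emotions.mean())  # average emotional profile for the video
```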

With Niil IA, selection processes become more efficient and more effective. The tool gives recruiters an objective, impartial assessment of each candidate's emotions, helping to identify people with a better fit to the organizational culture, a greater capacity to handle stressful situations, and stronger interpersonal communication skills.

With this technology, companies can make better-grounded hiring decisions, resulting in more emotionally aligned teams and, consequently, a more productive and harmonious work environment.

Even so, the results must be analyzed jointly with psychology professionals, combined with innovative behavioral tests and interpreted against the current state of the job market.

Niil IA brings a whole new dimension to the recruitment process!

By using Niil IA, our client has access to a detailed chart of the candidate's different emotions. This visualization supports a deeper analysis of each emotion, helping to identify and understand the candidate's emotional nuances and to make better-informed decisions. The chart offers a clear, objective view of the emotions expressed, making the analysis more precise and efficient, so each candidate can be evaluated more completely.
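As a minimal sketch of the kind of chart described above, the example below plots per-emotion scores over time with pandas and matplotlib. The export file and its layout (one row per sampled frame, one column per emotion) are assumptions made for illustration.

```python
# Hypothetical example: charting a candidate's emotion scores over time.
# "candidate_emotions.csv" and its column layout are assumed: one row per
# sampled frame, one column per emotion.
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.read_csv("candidate_emotions.csv")

scores.plot()  # one line per emotion
plt.xlabel("Sampled frame")
plt.ylabel("Score")
plt.title("Candidate emotional profile across the video résumé")
plt.show()
```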

The scientific basis of our Niil IA artificial intelligence:

- Darwin, C. The Expression of the Emotions in Man and Animals. (1886).
- Ekman, P. Facial expression and emotion. Am. Psychol. 48, 384 (1993).
- Ekman, P. & Friesen, W. Facial action coding system: a technique for the measurement of facial movement. Palo Alto: Consulting Psychologists (1978).
- Cohn, J. F., Ambadar, Z. & Ekman, P. Observer-based measurement of facial expression with the Facial Action Coding System. The handbook of emotion elicitation and assessment, 203–221 (2007).
- Kilbride, J. E. & Yarczower, M. Ethnic bias in the recognition of facial expressions. J. Nonverbal Behav. 8, 27–41 (1983).
- Graesser, A. C. et al. Detection of emotions during learning with AutoTutor. Proceedings of the 28th annual meetings of the cognitive science society, 285–290 (Citeseer, 2006).
- Fridlund, A. J., Schwartz, G. E. & Fowler, S. C. Pattern recognition of self-reported emotional state from multiple-site facial EMG activity during affective imagery. Psychophysiology 21, 622–637 (1984).
- Larsen, J. T., Norris, C. J. & Cacioppo, J. T. Effects of positive and negative affect on electromyographic activity over zygomaticus major and corrugator supercilii. Psychophysiology 40, 776–785 (2003).
- Sayette, M. A. et al. Alcohol and group formation: a multimodal investigation of the effects of alcohol on emotion and social bonding. Psychol. Sci. 23, 869–878 (2012).
- Navarathna, R. et al. Predicting movie ratings from audience behaviors. IEEE Winter Conference on Applications of Computer Vision, 1058–1065 (2014).
- Golland, Y., Mevorach, D. & Levit-Binnun, N. Affiliative zygomatic synchrony in co-present strangers. Scientific Reports vol. 9 (2019).
- Cheong, J. H., Brooks, S. & Chang, L. J. FaceSync: Open source framework for recording facial expressions with head-mounted cameras. F1000Res. (2019).
- Cheong, J. H., Molani, Z., Sadhukha, S. & Chang, L. J. Synchronized affect in shared experiences strengthens social connection. (2020) doi:10.31234/osf.io/bd9wn.
- De la Torre, F. et al. IntraFace. IEEE Int Conf Autom Face Gesture Recognit Workshops 1, (2015).
- Vemulapalli, R. & Agarwala, A. A compact embedding for facial expression similarity. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 5683–5692 (2019).
- Stöckli, S., Schulte-Mecklenbeck, M., Borer, S. & Samson, A. C. Facial expression analysis with AFFDEX and FACET: A validation study. Behav. Res. Methods 50, 1446–1460 (2018).
- Haines, N., Southward, M. W., Cheavens, J. S., Beauchaine, T. & Ahn, W.-Y. Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity. PLoS One 14, e0211735 (2019).
- Höfling, T. T. A., Gerdes, A. B. M., Föhl, U. & Alpers, G. W. Read My Face: Automatic Facial Coding Versus Psychophysiological Indicators of Emotional Valence and Arousal. Front. Psychol. 11, 1388 (2020).
- Dupré, D., Krumhuber, E. G., Küster, D. & McKeown, G. J. A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLoS One 15, e0231968 (2020).
- Werner, P. et al. Automatic Pain Assessment with Facial Activity Descriptors. IEEE Transactions on Affective Computing 8, 286–299 (2017).
- Chen, P.-H. A. et al. Socially transmitted placebo effects. Nat Hum Behav 3, 1295–1305 (2019).
- Littlewort, G. C., Bartlett, M. S. & Lee, K. Automatic coding of facial expressions displayed during posed and genuine pain. Image Vis. Comput. 27, 1797–1803 (2009).
- Wang, Y. et al. Automatic Depression Detection via Facial Expressions Using Multiple Instance Learning. in 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) 1933–1936 (2020).

- Penton-Voak, I. S., Pound, N., Little, A. C. & Perrett, D. I. Personality Judgments from Natural and Composite Facial Images: More Evidence For A ‘Kernel Of Truth’ In Social Perception. Soc. Cogn. 24, 607–640 (2006).
- Segalin, C. et al. What your Facebook Profile Picture Reveals about your Personality. Proceedings of the 25th ACM international conference on Multimedia (2017) doi:10.1145/3123266.3123331.
- Kachur, A., Osin, E., Davydov, D., Shutilov, K. & Novokshonov, A. Assessing the Big Five personality traits using real-life static facial images. Sci. Rep. 10, 8487 (2020).
- Kosinski, M. Facial recognition technology can expose political orientation from naturalistic facial images. Sci. Rep. 11, 100 (2021).
- Kanade, T., Cohn, J. F. & Yingli Tian. Comprehensive database for facial expression analysis. in Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580) 46–53 (2000).
- Lucey, P. et al. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops (2010) doi:10.1109/cvprw.2010.5543262.
- Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P. & Cohn, J. F. DISFA: A Spontaneous Facial Action Intensity Database. IEEE Transactions on Affective Computing 4, 151–160 (2013).
- Zhang, X. et al. BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image Vis. Comput. 32, 692–706 (2014).
- Mavadati, M., Sanger, P. & Mahoor, M. H. Extended disfa dataset: Investigating posed and spontaneous facial expressions. in proceedings of the IEEE conference on computer vision and pattern recognition workshops 1–8 (2016).
- Zhang, Z. et al. Multimodal spontaneous emotion corpus for human behavior analysis. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 3438–3446 (2016).
- Krumhuber, E. G., Skora, L., Küster, D. & Fou, L. A Review of Dynamic Datasets for Facial Expression Research. Emot. Rev. 9, 280–292 (2017).
- Dhall, A., Goecke, R., Joshi, J., Sikka, K. & Gedeon, T. Emotion Recognition In The Wild Challenge 2014: Baseline, Data and Protocol. in Proceedings of the 16th International Conference on Multimodal Interaction 461–466 (Association for Computing Machinery, 2014).
- Qi, P., Zhang, Y., Zhang, Y., Bolton, J. & Manning, C. D. Stanza: A Python Natural Language Processing Toolkit for Many Human Languages. arXiv [cs.CL] (2020).
- Brockman, G. et al. OpenAI Gym. arXiv [cs.LG] (2016).
- iMotions Biometric Research Platform 6.0. (iMotions A/S, Copenhagen, Denmark, 2016).
- Van Kuilenburg, H., Den Uyl, M. J., Israël, M. L. & Ivan, P. Advances in face and gesture analysis. Measuring Behavior 2008 371, (2008).
- Yitzhak, N. et al. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions. Emotion 17, 1187–1198 (2017).
- Krumhuber, E. G., Küster, D., Namba, S. & Skora, L. Human and machine validation of 14 databases of dynamic facial expressions. Behav. Res. Methods (2020) doi:10.3758/s13428-020-01443-y.
- Krumhuber, E. G., Küster, D., Namba, S., Shah, D. & Calvo, M. G. Emotion recognition from posed and spontaneous dynamic expressions: Human observers versus machine analysis. Emotion 21, 447–451 (2021).
- Littlewort, G. et al. The computer expression recognition toolbox (CERT). in 2011 IEEE International Conference on Automatic Face Gesture Recognition (FG) 298–305 (2011).
- McDuff, D. et al. AFFDEX SDK: A Cross-Platform Real-Time Multi-Face Expression Recognition Toolkit. in Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems 3723–3726 (Association for Computing Machinery, 2016).
- Baltrusaitis, T., Zadeh, A., Lim, Y. C. & Morency, L. OpenFace 2.0: Facial Behavior Analysis Toolkit. in 2018 13th IEEE International Conference on Automatic Face Gesture Recognition (FG 2018) 59–66 (2018).
- Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W. & Smith, S. M. FSL. Neuroimage 62, 782–790 (2012).
- Cox, R. W. AFNI: software for analysis and visualization of functional magnetic resonance neuroimages. Comput. Biomed. Res. 29, 162–173 (1996).
- Friston, K. J., Frith, C. D., Liddle, P. F. & Frackowiak, R. S. Comparing functional (PET) images: the assessment of significant change. J. Cereb. Blood Flow Metab. 11, 690–699 (1991).
- Abraham, A. et al. Machine learning for neuroimaging with scikit-learn. Front. Neuroinform. 8, 14 (2014).
- Ekman, P. & Rosenberg, E. L. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS). (Oxford University Press, 1997).
- Paszke, A. et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv [cs.LG] (2019).
- Pedregosa, F. et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
- Zhang, S. et al. FaceBoxes: A CPU real-time face detector with high accuracy. in 2017 IEEE International Joint Conference on Biometrics (IJCB) 1–9 (2017).
- Zhang, L. et al. Multi-Task Cascaded Convolutional Networks Based Intelligent Fruit Detection for Designing Automated Robot. IEEE Access 7, 56028–56038 (2019).
- Zhang, N., Luo, J. & Gao, W. Research on Face Detection Technology Based on MTCNN. in 2020 International Conference on Computer Network, Electronic and Automation (ICCNEA) 154–158 (2020).
- Deng, J. et al. RetinaFace: Single-stage Dense Face Localisation in the Wild. arXiv [cs.CV] (2019).
- Yang, S., Luo, P., Loy, C.-C. & Tang, X. Wider face: A face detection benchmark. in Proceedings of the IEEE conference on computer vision and pattern recognition 5525–5533 (2016).
- Shen, J. et al. The first facial landmark tracking in-the-wild challenge: Benchmark and results. in Proceedings of the IEEE international conference on computer vision workshops 50–58 (2015).
- Sagonas, C., Antonakos, E., Tzimiropoulos, G., Zafeiriou, S. & Pantic, M. 300 Faces In-The-Wild Challenge: database and results. Image Vis. Comput. 47, 3–18 (2016).
- Guo, X. et al. PFLD: A Practical Facial Landmark Detector. arXiv [cs.CV] (2019).
- Howard, A. G. et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv [cs.CV] (2017).
- Chen, S., Liu, Y., Gao, X. & Han, Z. MobileFaceNets: Efficient CNNs for Accurate Real-Time Face Verification on Mobile Devices. arXiv [cs.CV] (2018).
- Sagonas, C., Tzimiropoulos, G., Zafeiriou, S. & Pantic, M. A semi-automatic methodology for facial landmark annotation. in Proceedings of the IEEE conference on computer vision and pattern recognition workshops 896–903 (2013).
- Hyniewska, S., Sato, W., Kaiser, S. & Pelachaud, C. Naturalistic Emotion Decoding From Facial Action Sets. Front. Psychol. 9, 2678 (2018).
- Grafsgaard, J. F., Boyer, K. E. & Lester, J. C. Predicting Facial Indicators of Confusion with Hidden Markov Models. in Affective Computing and Intelligent Interaction 97–106 (Springer Berlin Heidelberg, 2011).
- Julle-Danière, E. et al. Are there non-verbal signals of guilt? PLoS One 15, e0231756 (2020).
- Shao, Z., Liu, Z., Cai, J. & Ma, L. Deep adaptive attention for joint facial action unit detection and face alignment. in Proceedings of the European Conference on Computer Vision (ECCV) 705–720 (2018).
- Shao, Z., Liu, Z., Cai, J. & Ma, L. JÂA-net: Joint facial action unit detection and face alignment via adaptive attention. Int. J. Comput. Vis. 129, 321–340 (2021).
- Li et al. An EEG-Based Multi-Modal Emotion Database with Both Posed and Authentic Facial Actions for Emotion Analysis. in 2020 15th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2020) (FG) vol. 0 336–343 (2020).
- Lucey, P., Cohn, J. F., Prkachin, K. M., Solomon, P. E. & Matthews, I. Painful data: The UNBC-McMaster shoulder pain expression archive database. in 2011 IEEE International Conference on Automatic Face Gesture Recognition (FG) 57–64 (2011).
- Kollias, D., Nicolaou, M. A., Kotsia, I., Zhao, G. & Zafeiriou, S. Recognition of affect in the wild using deep neural networks. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 26–33 (2017).
- Zafeiriou, S. et al. Aff-Wild: Valence and Arousal ‘In-the-Wild’ Challenge. 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (2017).
- Kollias, D. et al. Deep Affect Prediction in-the-Wild: Aff-Wild Database and Challenge, Deep Architectures, and Beyond. Int. J. Comput. Vis. 127, 907–929 (2019).
- Kollias, D. & Zafeiriou, S. Expression, Affect, Action Unit Recognition: Aff-Wild2, Multi-Task Learning and ArcFace. arXiv [cs.CV] (2019).
- Kollias, D., Sharmanska, V. & Zafeiriou, S. Face Behavior a la carte: Expressions, Affect and Action Units in a Single Network. arXiv [cs.CV] (2019).
- Kollias, D., Schulc, A., Hajiyev, E. & Zafeiriou, S. Analysing Affective Behavior in the First ABAW 2020 Competition. arXiv [cs.LG] (2020).
- Baltrušaitis, T., Mahmoud, M. & Robinson, P. Cross-dataset learning and person-specific normalisation for automatic Action Unit detection. in 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG) vol. 06 1–6 (2015).
- Dalal, N. & Triggs, B. Histograms of oriented gradients for human detection. in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05) vol. 1, 886–893 (2005).
- van der Walt, S. et al. scikit-image: image processing in Python. PeerJ 2, e453 (2014).
- Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M. & Pollak, S. D. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest vol. 20 1–68 (2019).
- Haines, N. et al. Regret Induces Rapid Learning from Experience-based Decisions: A Model-based Facial Expression Analysis Approach. bioRxiv 560011 (2019) doi:10.1101/560011.
- Luan, P., Huynh, V. & Tuan Anh, T. Facial Expression Recognition using Residual Masking Network. in IEEE 25th International Conference on Pattern Recognition 4513–4519 (2020).
- Goodfellow, I. J. et al. Challenges in representation learning: A report on three machine learning contests. Neural Networks vol. 64 59–63 (2015).
- Zhang, Z., Luo, P., Loy, C. C. & Tang, X. From facial expression recognition to interpersonal relation prediction. Int. J. Comput. Vis. 126, 550–569 (2018).
- Lyons, M., Kamachi, M. & Gyoba, J. The Japanese Female Facial Expression (JAFFE) Dataset. (1998). doi:10.5281/zenodo.3451524.
- Mollahosseini, A., Hasani, B. & Mahoor, M. H. AffectNet: A database for facial expression, valence, and arousal computing in the wild. IEEE Trans. Affect. Comput. 10, 18–31 (2019).
- McKinney, W. & Others. pandas: a foundational Python library for data analysis and statistics. Python for High Performance and Scientific Computing 14, 1–9 (2011).
- Afzal, S. & Robinson, P. Natural affect data — Collection & annotation in a learning context. 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (2009) doi:10.1109/acii.2009.5349537.
- Jones, E., Oliphant, T., Peterson, P. & Others. SciPy: Open source scientific tools for Python. (2001).
- Chang, L. et al. cosanlab/nltools: 0.3.14. (Zenodo, 2019). doi:10.5281/ZENODO.2229812.
- Fabian Benitez-Quiroz, C., Srinivasan, R. & Martinez, A. M. Emotionet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. in Proceedings of the IEEE conference on computer vision and pattern recognition 5562–5570 (2016).
- Zhu, Y., Cai, H., Zhang, S., Wang, C. & Xiong, Y. TinaFace: Strong but Simple Baseline for Face Detection. arXiv [cs.CV] (2020).
- Jourabloo, A., Ye, M., Liu, X. & Ren, L. Pose-invariant face alignment with a single cnn. in Proceedings of the IEEE International Conference on computer vision 3200–3209 (2017).
- Zhu, X., Lei, Z., Liu, X., Shi, H. & Li, S. Z. Face alignment across large poses: A 3d solution. in Proceedings of the IEEE conference on computer vision and pattern recognition 146–155 (2016).
- Valle, R., Buenaposada, J. M., Valdés, A. & Baumela, L. Face alignment using a 3D deeply-initialized ensemble of regression trees. Computer Vision and Image Understanding vol. 189 102846 (2019).
- Valle, R., Buenaposada, J. M., Valdes, A. & Baumela, L. A deeply-initialized coarse-to-fine ensemble of regression trees for face alignment. in Proceedings of the European Conference on Computer Vision (ECCV) 585–601 (2018).
- Watson, D. M., Brown, B. B. & Johnston, A. A data-driven characterisation of natural facial expressions when giving good and bad news. PLoS Comput. Biol. 16, e1008335 (2020).
- Chang, L. J. et al. Endogenous variation in ventromedial prefrontal cortex state dynamics during naturalistic viewing reflects affective experience. bioRxiv (2018).
- Jack, R. E., Garrod, O. G. B. & Schyns, P. G. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time. Curr. Biol. 24, 187–192 (2014).
- Rhue, L. Racial Influence on Automated Perceptions of Emotions. (2018) doi:10.2139/ssrn.3281765.
- Nagpal, S., Singh, M., Singh, R. & Vatsa, M. Deep Learning for Face Recognition: Pride or Prejudiced? arXiv [cs.CV] (2019).
- McDuff, D., Gontarek, S. & Picard, R. Remote Measurement of Cognitive Stress via Heart Rate Variability. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2014, 2957–2960 (2014).
