The Relationship Between Human-Computer Interaction and Artificial Intelligence


Human-computer interaction is the means by which humans and machines communicate, and it is also the platform through which technology delivers its value. The machine receives operating signals from humans across multiple dimensions and recognizes user intentions and emotions; as it does so, the boundary between human and machine gradually blurs, the depth of service machines can offer humans increases, and the two move toward ever closer integration, with the end state being machines that act as an extension of human capabilities.

"Man-machine symbiosis" is the end game

In 1960, Joseph (J.C.R.) Licklider proposed the idea of "man-computer symbiosis", and through US national science and technology programs he strongly supported research on graphics and visualization under the human-computer symbiosis concept, virtual object manipulation, networking, and related topics; iconic key technologies such as the personal computer and the Internet were subsequently born one after another.

His forward-looking vision of human-computer interaction guided the research and development of graphics technology, fostered the convergence of linguistics, psychology, and computer science that gave rise to the graphical user interface, helped create new industries such as personal computing and the Internet, and made a lasting contribution to the development of modern human-computer interaction.

In the future, human-computer interfaces will combine visual displays with new AI sensing technologies such as voice, vision, and brain-computer interfaces, opening multi-dimensional interactive channels across hearing, sight, and touch and ushering in a more natural, three-dimensional era of interaction. "Human-machine symbiosis" will be one step closer.

"Multi-sensory access" expands interactive scenes and precision

A person is a collection of "sensors": the touch of the skin, the sight of the eyes, the smell of the nose, the hearing of the ears. Multi-channel human-computer interaction provides more input channels and improves recognition coverage and accuracy across more scenarios. For example, the non-motion input of voice recognition, the static input of face recognition, and the motion input of gesture control steadily strengthen the machine's ability to "understand" intent.
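As a minimal illustration of the multi-channel idea above, the following Python sketch fuses intent guesses from several input channels by confidence-weighted voting. The channel names, the ChannelEvent class, and the fuse_intents function are hypothetical, not part of any specific framework.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ChannelEvent:
    """One observation from a single input channel (voice, face, gesture, ...)."""
    channel: str        # e.g. "voice", "face", "gesture"
    intent: str         # intent label guessed by that channel's recognizer
    confidence: float   # recognizer confidence in [0, 1]

def fuse_intents(events: List[ChannelEvent]) -> str:
    """Combine per-channel guesses into one intent by confidence-weighted voting."""
    scores: Dict[str, float] = {}
    for ev in events:
        scores[ev.intent] = scores.get(ev.intent, 0.0) + ev.confidence
    return max(scores, key=scores.get) if scores else "unknown"

# Example: three channels observed at roughly the same moment.
# In a real system the face channel would more likely contribute identity or
# emotion rather than a command; here it is simplified to an intent guess.
observations = [
    ChannelEvent("voice", "open_door", 0.80),    # non-motion input: speech
    ChannelEvent("gesture", "open_door", 0.60),  # motion input: pointing
    ChannelEvent("face", "close_door", 0.30),    # static input, simplified
]
print(fuse_intents(observations))  # -> "open_door"
```

Because each channel contributes a weighted vote, a low-confidence channel can be overruled without being ignored, which is one simple way a machine can combine evidence across senses.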

At the same time, multi-channel input is also an urgent need for people with special requirements. Users with disabilities can interact through keyboards, voice, gestures, facial expressions, lip movements, or even brain signals; sign-language recognition based on multi-channel or multi-modal perception, for example, can serve the daily communication of deaf and speech-impaired users.

Ultimately, the goal is for human-computer interaction to feel as natural as human-to-human interaction, and multi-channel input is an effective path toward "human-computer symbiosis": on the one hand, the machine perceives and learns the current state of the user before deciding its next action; on the other hand, wearable devices help infer the user's psychological and emotional state, enabling more "personalized" interaction.
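As a rough sketch of the second point, the example below infers a coarse state from wearable readings and maps it to an interaction style. The WearableReading fields, the thresholds, and the personalize helper are illustrative assumptions, not measurements or APIs from any real device.

```python
from dataclasses import dataclass

@dataclass
class WearableReading:
    """Hypothetical snapshot from a wrist-worn device."""
    heart_rate_bpm: float
    skin_conductance_us: float  # electrodermal activity in microsiemens

def estimate_state(reading: WearableReading) -> str:
    """Very coarse rule-of-thumb classifier; a real system would use a learned model."""
    if reading.heart_rate_bpm > 100 and reading.skin_conductance_us > 8.0:
        return "stressed"
    if reading.heart_rate_bpm < 65:
        return "relaxed"
    return "neutral"

def personalize(state: str) -> str:
    """Pick an interaction style that matches the inferred state."""
    styles = {
        "stressed": "short prompts, muted notifications",
        "relaxed": "richer suggestions, proactive tips",
        "neutral": "default interaction style",
    }
    return styles[state]

reading = WearableReading(heart_rate_bpm=108, skin_conductance_us=9.2)
state = estimate_state(reading)
print(state, "->", personalize(state))  # stressed -> short prompts, muted notifications
```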

Human-computer interaction and artificial intelligence symbiosis

The human-computer interaction module provides the input and output channels, while the artificial intelligence module provides the computation and inference core. They complement each other and coexist, progressing from imitating humans toward ever more human-like behavior, until "human-computer symbiosis" is reached.

In the first generation of human-computer interaction, one-dimensional keyboards and mice provided deterministic interaction: humans had to adapt to the machine, which led to repetitive-strain problems such as "mouse hand";

In the second generation, two-dimensional touch interfaces offered analog interaction, but the machine pre-defined the range of interaction and could not provide a more realistic sense of perception;

In the third generation, three-dimensional AI-driven interaction is based on understanding and inference: machines use big data and computation to infer people's intentions, opening the path toward "human-computer symbiosis".

The human body runs on its own automatic "interaction model", and a machine can build an analogous "interaction model" of its own: the human-computer interaction module captures input and feeds data to the artificial intelligence module, which refines its computation and inference and in turn improves the accuracy of the information the interaction module outputs. Within this framework, the machine adapts ever better to human actions, intentions, and emotions, and the two sides jointly move toward the goal of "human-machine symbiosis".
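To make this loop concrete, here is a minimal Python sketch of the cycle: the interaction module supplies input, the AI module infers an action, the interaction module outputs the result, and a feedback step accumulates data for later inference. All class names, methods, and the toy rule inside infer are assumptions made for illustration, not a description of any particular system.

```python
from typing import Dict

class InteractionModule:
    """Hypothetical HCI module: owns the input and output channels."""
    def read_input(self) -> Dict[str, str]:
        # In a real system this would poll voice, touch, vision, etc.
        return {"voice": "turn on the lights", "gesture": "point_at_lamp"}

    def respond(self, action: str) -> None:
        print(f"Machine action: {action}")

class InferenceModule:
    """Hypothetical AI module: turns multi-channel input into an action."""
    def __init__(self) -> None:
        # Tiny stand-in for a learned model: counts of observed intents.
        self.intent_counts: Dict[str, int] = {}

    def infer(self, signals: Dict[str, str]) -> str:
        # Toy rule: trust the voice channel when present, otherwise the gesture.
        return signals.get("voice") or signals.get("gesture") or "idle"

    def learn(self, intent: str) -> None:
        # Feedback step: accumulate data so later inferences can be refined.
        self.intent_counts[intent] = self.intent_counts.get(intent, 0) + 1

def interaction_loop(hci: InteractionModule, ai: InferenceModule, steps: int = 1) -> None:
    """One pass of the cycle: HCI input -> AI inference -> HCI output -> feedback."""
    for _ in range(steps):
        signals = hci.read_input()      # interaction module supplies the data
        intent = ai.infer(signals)      # AI module computes and infers
        hci.respond(intent)             # interaction module outputs the result
        ai.learn(intent)                # feedback improves future inference

interaction_loop(InteractionModule(), InferenceModule())
```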

In the future, the hardware of human-computer interaction, its input materials, form factors, and solutions, will be upgraded, and artificial intelligence algorithms, computing speed, and architectures will advance alongside it. With a greater range and precision of interaction, we will explore new sensory experiences, even create imagined worlds, enter a super-cognitive realm, and lead people into a world that the ordinary physical senses cannot perceive.
