Friday, November 22, 2024

WiMi Developed an Assistive Robotic Technology and a Control Approach Based on Hybrid BCI


WiMi Hologram Cloud Inc., a leading global Hologram Augmented Reality Technology provider, announced that it has developed an assistive robotic technology and a control approach based on a hybrid BCI (brain-computer interface). The technology combines an eye-tracker, an EEG signal recording device, a webcam, and a robotic arm, enabling the user to accurately control the movement of the robotic arm through a hybrid gaze-BCI.

This assistive robot control technology lets users steer the robotic arm's end-effector through the hybrid BCI, allowing more precise and flexible manipulation. The technology was developed to improve the robot's grasping performance, with a particular focus on the reaching motion, so that the grasping task itself can be automated. To achieve this goal, the development team divided the task into three key phases and leveraged natural human visual-motor coordination behavior.

First, the user specifies the target location for the assistive robot with the hybrid BCI in discrete selection mode. A virtual rectangle appears around the target, confirming to the user that the target position has been successfully communicated to the robot. The system then automatically switches to continuous velocity control mode and enters the second phase: the user steers the robotic arm end-effector with the hybrid BCI while avoiding collisions with obstacles. Once the end-effector enters a pre-specified area directly above the target object, it automatically stops and hovers there. Finally, a pre-programmed grasping routine executes: the end-effector moves downward, adjusts the gripper orientation to match the target's orientation in the workspace, and grasps the object. This design effectively reduces the number of degrees of freedom the user must control while still allowing objects to be reached in three dimensions.
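Taken together, the three phases form a simple state machine. The sketch below illustrates one way that flow could be structured in Python; all names here (Phase, the arm and bci interfaces, hover_zone) are hypothetical assumptions, since WiMi has not published its implementation:

```python
# Minimal sketch of the three-phase control flow described above.
# All interfaces (arm, bci, hover_zone) are hypothetical placeholders.
from enum import Enum, auto

class Phase(Enum):
    TARGET_SELECTION = auto()   # discrete selection mode
    REACHING = auto()           # continuous velocity control mode
    GRASPING = auto()           # pre-programmed grasp routine

def control_loop(arm, bci, hover_zone):
    phase = Phase.TARGET_SELECTION
    target = None
    while True:
        if phase == Phase.TARGET_SELECTION:
            target = bci.read_gaze_target()          # user gazes at the object
            if target is not None:
                bci.show_confirmation_rectangle(target)
                phase = Phase.REACHING               # automatic mode switch
        elif phase == Phase.REACHING:
            v = bci.read_velocity_command()          # hybrid-BCI velocity input
            arm.move_end_effector(v)
            if hover_zone.contains(arm.end_effector_pose(), target):
                arm.stop()                           # hover above the target
                phase = Phase.GRASPING
        elif phase == Phase.GRASPING:
            arm.descend_to(target)
            arm.align_gripper(target.orientation)    # match object orientation
            arm.close_gripper()
            break
```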


One of the key points of the technology is the application of the hybrid BCI itself. It combines gaze tracking and brain-computer interface technologies to control the robot through two modes: discrete selection and continuous velocity control. In discrete selection mode, the user inputs the target position by gazing at it; the system then switches automatically to continuous velocity control mode, which moves the robotic arm end-effector toward the target according to the user's velocity commands.
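In gaze-based interfaces, discrete selection is commonly implemented with a dwell-time criterion: a target counts as selected once the gaze rests on it long enough. The following is a minimal sketch of that common pattern, not WiMi's disclosed method; the gaze_stream and targets interfaces and the 0.8 s threshold are illustrative assumptions:

```python
# Hedged sketch of dwell-based target selection in the discrete mode.
# gaze_stream yields (x, y) fixations from the eye-tracker; targets are
# objects with a contains() hit-test. All of this is illustrative.
import time

def select_target_by_dwell(gaze_stream, targets, dwell_s=0.8):
    """Return the target the user fixates for at least dwell_s seconds."""
    candidate, since = None, None
    for gaze_xy in gaze_stream:
        hit = next((t for t in targets if t.contains(gaze_xy)), None)
        if hit is not candidate:
            candidate, since = hit, time.monotonic()   # gaze moved; restart timer
        elif hit is not None and time.monotonic() - since >= dwell_s:
            return hit                                 # dwell threshold reached
    return None
```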

The goal of WiMi's technology is accurate intent perception, efficient motion control, and natural human-computer interaction. The underlying logic combines several technical components and algorithms to ensure the stability, reliability, and performance of the control system. The eye-tracker and the EEG signal recording device play a key role, capturing the user's intent and attention by monitoring the gaze point and EEG signals in real time. The eye-tracker follows the user's eye movements to determine the gaze point and direction of view, while the EEG recording device captures the user's brain activity, from which signal processing and analysis algorithms extract features related to intent and attention.
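As a concrete illustration of such feature extraction, band power in standard EEG frequency ranges (e.g., alpha and beta) is a common choice in BCI work. The sketch below computes it with NumPy/SciPy; the sampling rate, band limits, and window shape are assumptions, not WiMi's published parameters:

```python
# Band-power features from a window of raw EEG, shaped (channels, samples).
# Alpha (8-13 Hz) and beta (13-30 Hz) are standard, illustrative choices.
import numpy as np
from scipy.signal import welch

def band_powers(eeg_window, fs=250.0, bands=((8, 13), (13, 30))):
    """Return the mean power per channel in each frequency band."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=int(fs), axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))   # average power in band
    return np.concatenate(feats)                   # flat feature vector
```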

The eye-movement data and EEG signals must be processed and decoded in real time to extract the user's intent and attentional state. This involves techniques such as machine learning, pattern recognition, and signal processing to recognize and decode what the user intends and where their attention lies.
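In practice, such decoding often reduces to a classifier that maps the extracted features to an intent or attention label. Below is a hedged sketch using scikit-learn's linear discriminant analysis, a common baseline in BCI research; WiMi's actual decoding algorithm has not been disclosed:

```python
# Illustrative intent decoder: LDA over band-power features.
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def train_intent_decoder(X_train, y_train):
    """X_train: (n_trials, n_features) band powers; y_train: intent labels."""
    clf = LinearDiscriminantAnalysis()
    clf.fit(X_train, y_train)
    return clf

# usage (single window): label = clf.predict(band_powers(window)[None, :])
```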

Environment sensing and obstacle avoidance are another important component. The technology uses sensors to perceive the surrounding environment and the locations of obstacles, and its algorithms plan safe paths and avoid collisions in real time, combining this information with user commands to ensure the safety and accuracy of the robotic arm during movement.

The shared controller fuses user commands with the robot's autonomous commands to form new control commands that precisely drive the end-effector. The actuation system then converts these commands into actual arm movement for accurate position control and gripping. This requires motion control algorithms, motion planning, and actuation control strategies to work together so that user intent is communicated precisely and the task is completed accurately.
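One simple way to picture such fusion is a weighted blend of the user's velocity command with a repulsive obstacle-avoidance term, in the spirit of classic potential-field methods. The sketch below is illustrative only; the blending weight and field gains are assumptions, not WiMi's controller:

```python
# Shared control as a weighted blend of user intent and autonomous avoidance.
# alpha, gain, and influence radius are illustrative tuning parameters.
import numpy as np

def shared_control(v_user, ee_pos, obstacles, alpha=0.7, gain=0.5, influence=0.3):
    """Fuse the user's 3D velocity command with a repulsive avoidance term."""
    v_avoid = np.zeros(3)
    for obs in obstacles:                       # obs: (x, y, z) obstacle position
        diff = ee_pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                # only nearby obstacles repel
            v_avoid += gain * (1.0 / d - 1.0 / influence) * diff / d**2
    return alpha * v_user + (1 - alpha) * v_avoid   # fused velocity command
```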

The visual feedback interface provides intuitive interaction and feedback. A GUI displays a real-time view of the robotic arm's working area, presenting the target position, obstacles, and the state of the arm so that the user can intuitively follow the system's operation. Augmented reality adds enhanced visual feedback, such as the virtual confirmation rectangle and orientation cues for target objects, further improving the accuracy and efficiency of operation. Through this interface, users can monitor the robot's motion, the target object's position, and the system's responses in real time, and thus better understand and control the system's behavior.
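As an illustration of that kind of overlay, the snippet below draws a confirmation rectangle and a status label on a webcam frame with OpenCV; the bounding-box source, label text, and window name are hypothetical:

```python
# Hedged sketch of the visual-feedback overlay: a confirmation rectangle
# around the selected target in the webcam frame, drawn with OpenCV.
import cv2

def draw_target_overlay(frame, target_bbox, status_text="Target confirmed"):
    x, y, w, h = target_bbox                    # bounding box from detection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, status_text, (x, y - 10),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame

# usage: cv2.imshow("Workspace", draw_target_overlay(frame, (120, 80, 60, 60)))
```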

SOURCE: PRNewswire
