1. Pose perception
Pose perception is achieved through the combined use of non-contact position sensors and attitude sensors to track changes in the position and spatial orientation of an object.
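As a minimal illustration, a planar pose can be represented by fusing a position reading with a heading from an attitude sensor. The sketch below (plain Python, with hypothetical sensor values) composes the two into a pose and maps a point from the object's local frame into the world frame; the function names and units are illustrative, not from any specific system.

```python
import math

def fuse_pose(position, heading_deg):
    """Combine a non-contact position reading (x, y in metres) with an
    attitude-sensor reading (heading in degrees) into a planar pose."""
    x, y = position
    theta = math.radians(heading_deg)
    return (x, y, theta)

def transform_point(pose, local_point):
    """Map a point given in the object's local frame into the world
    frame using the fused pose (rotate, then translate)."""
    x, y, theta = pose
    px, py = local_point
    wx = x + px * math.cos(theta) - py * math.sin(theta)
    wy = y + px * math.sin(theta) + py * math.cos(theta)
    return (wx, wy)
```

For example, an object at (1, 2) rotated 90 degrees carries its local point (1, 0) to roughly (1, 3) in world coordinates.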

2. Flexible Detection
Currently, a wide variety of sensors are employed in intelligent sensing devices. Their applications have permeated areas such as industrial production, ocean exploration, environmental protection, medical diagnosis, bioengineering, space development and smart homes.
As the demands of the information age increase, expectations regarding performance parameters such as the scope, accuracy and stability of measured information gradually increase.
This has presented new challenges for conventional sensors, particularly for measuring gas, pressure and humidity in special environments and for handling special signals.
In response to these growing demands, new sensor technologies have developed along the following trends: the development of new materials, new processes and novel sensors; sensor integration and intelligence; miniaturization of sensor hardware systems and components; and the integration of sensor technology with other disciplines.
At the same time, there is a desire for sensors with transparency, flexibility, extensibility, the ability to freely bend or even fold, portability and usability. With the advancement of flexible substrate materials, flexible sensors have emerged that meet all of these trending characteristics.
1) Characteristics of flexible sensors
Flexible materials, in contrast to rigid ones, typically exhibit properties such as softness, low modulus, and ease of deformation. Common flexible materials include polyvinyl alcohol (PVA), polyethylene terephthalate (PET), polyimide (PI), polyethylene naphthalate (PEN), paper, and textile materials.
Flexible sensors are those made from these flexible materials, offering excellent flexibility, extensibility, and even the ability to bend or fold freely.
With various structural designs, they can be arranged as needed depending on measurement conditions, facilitating convenient inspection of complex subjects.
These new flexible sensors are widely used in various fields such as electronic skin, healthcare, electronics, electrical engineering, sports equipment, textiles, aerospace and environmental monitoring.
2) Classification of Flexible Sensors
Flexible sensors are diverse, with various categorization methods. Classified by use, flexible sensors include pressure sensors, gas sensors (for alcohol detection), humidity sensors (for weather forecasting), temperature sensors (such as thermometers), voltage sensors, magnetoresistive sensors, and thermal flow sensors (used in refrigerators).
Classified by sensing mechanism, flexible sensors include resistive, capacitive, piezoelectric, and inductive types.
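As a minimal sketch of the capacitive mechanism, a parallel-plate model shows how a pressure-induced reduction of the dielectric gap raises the capacitance. The dimensions and relative permittivity below are illustrative values, not taken from any specific device.

```python
EPS_0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Capacitance of a parallel-plate structure, the basis of many
    capacitive flexible pressure sensors: applied pressure thins the
    dielectric gap, which increases the measured capacitance."""
    return EPS_0 * eps_r * area_m2 / gap_m
```

Halving the gap doubles the capacitance, which is the signal a capacitive readout circuit converts into a pressure estimate.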
3) Common Flexible Sensors
(1) Flexible gas sensors
Flexible gas sensors place gas-sensitive thin-film materials on the electrode surface of a flexible substrate.
They are characterized by lightness, flexibility, ability to bend easily and potential for large-scale production. Thin-film materials are known for their high sensitivity and relatively simple manufacturing process, attracting significant attention.
This fully meets the portability and low-power requirements of gas sensors in special environments, overcoming traditional gas sensors' limitations of poor portability, limited measurement range, small production scale and high cost. They can perform simple and accurate detection of gases such as NH3, NO and ethanol, and have therefore attracted widespread attention.
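Resistive gas sensing of this kind is often modelled with a power law relating the sensor's resistance ratio to gas concentration. The sketch below assumes hypothetical calibration constants a and b; real values come from calibrating a specific device against known concentrations.

```python
def gas_concentration_ppm(r_s, r_0, a=1.0, b=0.6):
    """Estimate gas concentration from a resistive gas sensor using the
    common power-law model Rs/R0 = a * C**(-b).  The constants a and b
    are hypothetical defaults, standing in for device calibration."""
    ratio = r_s / r_0
    return (ratio / a) ** (-1.0 / b)
```

With a = 1 and b = 0.5, a sensing resistance one tenth of the clean-air baseline corresponds to a concentration of about 100 ppm under this model.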
(2) Flexible pressure sensors
Flexible pressure sensors are widely used in areas such as smart clothing, smart sports and robotic “skin”.
Polyvinylidene fluoride, silicone rubber and polyimide, used as base materials, have been widely employed in the manufacture of flexible pressure sensors.
These materials distinguish themselves from force sensors that use metal strain gauges and from common diffusion pressure sensors that use n-type semiconductor chips by offering superior flexibility, conductivity, and piezoresistive characteristics. (Figure 2)
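The piezoresistive behaviour mentioned above is commonly characterized by a gauge factor GF, defined by dR/R0 = GF * strain. A minimal sketch follows; the default gauge factor of 2.0 is a typical metal-foil value, used purely as an example rather than a property of the materials discussed here.

```python
def strain_from_resistance(r, r_0, gauge_factor=2.0):
    """Infer mechanical strain from the relative resistance change of a
    piezoresistive element via dR/R0 = GF * strain.  The default gauge
    factor is a typical metal-foil value, used only for illustration."""
    return (r - r_0) / (r_0 * gauge_factor)
```

A 2% resistance rise on a 100-ohm element with GF = 2 corresponds to 1% strain.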

(3) Flexible humidity sensor
Humidity sensors mainly consist of two types: resistive and capacitive. Resistive humidity sensors consist of a moisture-sensitive layer coated on a substrate; as water vapor from the air is absorbed into the moisture-sensitive film, the film's resistance changes.
This property can be used to measure humidity. Capacitive humidity sensors are generally made from polymer films, with common materials including polystyrene, polyimide and cellulose acetate butyrate.
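For the capacitive type, the response is often close to linear in relative humidity. The sketch below inverts an assumed linear calibration C = C_dry * (1 + alpha * RH); the sensitivity alpha and the dry-baseline capacitance are hypothetical calibration constants.

```python
def relative_humidity(c_measured, c_dry, alpha=0.003):
    """Invert the near-linear response of a capacitive humidity sensor,
    C = C_dry * (1 + alpha * RH), to recover relative humidity in %.
    alpha (fractional change per %RH) is an assumed calibration value."""
    return (c_measured / c_dry - 1.0) / alpha
```

For instance, a reading of 207 pF against a dry baseline of 180 pF maps to about 50 %RH under these assumed constants.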
Humidity sensors are rapidly evolving from simple hygroscopic components to integrated, intelligent, multi-parameter sensing devices. Traditional dry and wet bulb hygrometers or capillary hygrometers are no longer able to meet the needs of modern science.
Flexible humidity sensors, due to their low cost, low power consumption, ease of fabrication and integration into smart systems, have been widely researched.
The base material for making these flexible humidity sensors is similar to other flexible sensors, and there are many methods for making the moisture-sensitive film, including dip coating, spin coating, screen printing and inkjet printing.
The flexible sensor structures are versatile and can be arranged to meet the requirements of measurement conditions. They can conveniently and accurately measure signals in special environments, addressing the challenges of sensor miniaturization, integration and intelligence.
These new flexible sensors play a crucial role in electronic skin, biomedicine, wearable electronics and the aerospace industry. However, the current level of technology for preparing materials such as carbon nanotubes and graphene for flexible sensors is immature and questions regarding cost, application range and lifetime persist.
Ordinary flexible substrates are not heat resistant, which leads to high stress and poor adhesion between the flexible substrate and the film material. Techniques for assembling, organizing, integrating, and packaging flexible sensors also need further improvement.
4) Common Materials for Flexible Sensors

(1) Flexible substrates
To meet the needs of flexible electronic devices, properties such as lightness, transparency, flexibility, elasticity, insulation and corrosion resistance have become key indicators for flexible substrates.
Among the many flexible substrate options, polydimethylsiloxane (PDMS) has become the first choice. Its advantages include easy availability, stable chemical properties, transparency and good thermal stability.
In particular, its property of forming distinct adhesive and non-adhesive regions under ultraviolet light facilitates the adhesion of electronic materials to its surface.
Many flexible electronic devices achieve significant flexibility by reducing substrate thickness; however, this method is limited to nearly flat substrate surfaces. In contrast, stretchable electronic devices can completely adhere to complex and irregular surfaces.
Currently, there are generally two strategies to achieve the elasticity of wearable sensors.
The first method is to directly adhere thin conductive materials with low Young's modulus to the flexible substrate; the second is to assemble devices from inherently stretchable conductors, usually prepared by mixing conductive materials into an elastic matrix.
(2) Metallic Materials
Typically comprising conductive materials such as gold, silver and copper, metallic materials are mainly used for electrodes and conductors.
In modern printing processes, conductive materials often employ conductive nanoinks, including nanoparticles and nanowires. In addition to excellent conductivity, metallic nanoparticles can be sintered into thin films or wires.
(3) Inorganic Semiconductor Materials
Represented by ZnO and ZnS, inorganic semiconductor materials present broad application prospects in the field of wearable flexible electronic sensors due to their excellent piezoelectric properties.
(4) Organic Materials
Large-scale pressure sensor arrays are crucial for the future development of wearable sensors. Pressure sensors based on piezoresistive and capacitive signal mechanisms suffer from signal crosstalk, leading to inaccurate measurements.
This issue represents one of the biggest challenges in the advancement of wearable sensors. The use of transistors offers a solution to reduce signal crosstalk.
Consequently, many studies in the area of wearable sensors and artificial intelligence focus on how to achieve flexible pressure-sensitive transistors on a large scale.
5) Application of Flexible Sensors
Flexible electronics covers many fields; the foldable phone launched by Huawei, for example, employs flexible electronic technology.
Typically, flexible electronics are manufactured from a mixture of organic and inorganic materials, exhibiting excellent flexibility. Flexible sensors, made from flexible materials, exhibit impressive environmental adaptability.
As the Internet of Things and artificial intelligence evolve, many flexible sensors are characterized by their high integration and intelligent features.
The advantages of flexible sensors present promising application prospects, including in medical electronics, environmental monitoring and wearables.
For example, in the field of environmental monitoring, scientists can place flexible sensors on devices to monitor the intensity of typhoons and storms.
In terms of wearables, flexible electronics are best suited for testing skin-related parameters given the non-flat nature of the human body.
Jianping Yu and his team proposed a new set of flexible, three-dimensional capacitive tactile sensors capable of simultaneously measuring pressure and shear force.
The sensing electrode layer is based on a flexible printed circuit board (FPCB) and the floating electrode layer on polydimethylsiloxane (PDMS); the fragile interface circuitry is integrated into the sensing electrode layer at the bottom, significantly increasing the flexural rigidity of the sensor assembly.
The conductive mesh fabric formed by coating carbon-based conductive composite materials on mesh fabric, developed by Weijing Yi and his team, exhibits pronounced piezoresistive performance.
The pressure and resistance relationship of this conductive mesh fabric within the pressure range has a good linear relationship and excellent repeatability.
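A linear pressure-response characteristic like the one described is typically calibrated with an ordinary least-squares line fit. The sketch below uses hypothetical calibration points, not data from the fabric in question.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b, used to calibrate a
    sensor whose response is linear over its working range."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Hypothetical calibration points: applied pressure (kPa) vs. reading
pressures = [0.0, 10.0, 20.0, 30.0, 40.0]
readings = [1.0, 3.0, 5.0, 7.0, 9.0]
slope, intercept = fit_linear(pressures, readings)
```

The fitted slope and intercept then convert raw readings back into pressure; the residuals of the fit give a direct measure of how linear and repeatable the sensor actually is.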
This fabric can be used to measure pressure in smart clothing, flexible mannequins and more, making it important for wearable-device research. A floating-gate memory fabricated with PEN as the flexible substrate and organic materials as the conductive layer also performs well, and the resulting flexible pressure-sensing array offers high resolution.
SOHM and others have created flexible pressure sensors by embedding PDMS electrode layers in vertically aligned carbon nanotube arrays, which can simulate tactile sensing functions and be used for robotic “skin” research.
3. Part perception and identification

Part identification is an essential step in industrial manufacturing. The main objective is to discern whether the parts or blanks being fed into the machine tools for processing are in fact the intended parts or blanks, as well as identifying their current position information.
In small-scale operations or industries with low automation requirements, this part detection and identification can be performed manually.
However, in large-scale industrial manufacturing or flexible automated manufacturing systems, numerous different parts are automatically routed to various processing devices within the system, necessitating automatic detection and identification.
Combining computer vision and artificial intelligence for automatic part identification and detection is a key area of current research.
According to statistics, more than 80% of the information humans process comes from visual inputs, making visual sensors advantageous in several ways for acquiring information about the workspace and workpiece:
(1) Even after discarding a significant portion of the visual data, the remaining information about the surrounding environment is often more abundant and accurate than that provided by LIDAR or ultrasonic sensors.
(2) LIDAR and ultrasonic sensors operate by actively emitting pulses and receiving reflected pulses for distance measurement. Therefore, when multiple parts are present simultaneously on a bench, there may be interference between them. However, this problem does not exist with visual measurements, which are passive.
(3) The sampling period of LIDAR and ultrasonic sensor data is generally longer than that of cameras, making them less efficient at supplying information to high-speed robots. In contrast, visual sensors offer considerably faster sampling rates.
Of course, visual sensors have their drawbacks: for example, they are less effective than active sensors such as millimeter-wave radar in fog, in direct sunlight, and at night.
Active sensors can directly measure parameters such as distance and speed of a target, while visual sensors require computation to obtain them.
However, in structured environments such as laboratories and automated production workshops, the dual advantages of visual sensors in terms of information capacity and collection speed will undoubtedly play a crucial role in the development of automatic part detection and recognition.
With the continuous improvement of computer performance and the rapid development and improvement of computer vision technology, the use of computers to recognize targets in images has become an important point of research.
Furthermore, the widespread adoption of high-speed hardware implementation methods has enabled real-time image recognition technology to be better applied in practice.
Therefore, the use of computer vision combined with artificial intelligence to achieve automatic detection and recognition of workpieces has significant practical significance.
The initial part inspection and identification phase relied mainly on manual methods. However, as production-line speeds continue to increase and the demands on part inspection and identification grow, manual methods have become increasingly unsuitable for industrial requirements.
This has led to the emergence of numerous innovative technologies to meet workpiece inspection and identification needs, such as eddy current detection, infrared inspection, ultrasonic testing, radiographic testing, holographic inspection, and machine vision inspection technologies.
These technologies have given new vitality to part inspection and identification, significantly increasing the level of automation.
Among these emerging technologies, machine vision systems have gained the most widespread application due to their ability to acquire abundant and accurate information.
For example, visual assistance in robot assembly can identify component dimensions and shapes to ensure assembly accuracy and quality control.
Furthermore, based on information recognized by vision, products can be loaded and unloaded through automated logistics systems.
This allows identification of fast-moving parts, determination of an object's position and orientation relative to coordinates, completion of object positioning and categorization, recognition of the object's positional distance and attitude angle, extraction of features from prescribed parameters, and detection of errors.
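Determining an object's position and orientation from an image is commonly done with image moments. The sketch below computes a centroid and principal-axis angle from a binary mask; it illustrates the standard moment technique, not any specific system's implementation.

```python
import math

def object_pose(binary_image):
    """Estimate the position (centroid) and orientation of a single
    object in a binary image using first- and second-order moments,
    a standard machine-vision step for part localisation."""
    pts = [(x, y) for y, row in enumerate(binary_image)
           for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    # Central second moments give the principal-axis orientation.
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    angle = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, angle
```

A horizontal bar of foreground pixels, for instance, yields its midpoint as the centroid and an orientation angle of zero.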
Currently, part identification predominantly employs traditional camera-based calibration methods.
From the perspective of computational thinking, traditional camera calibration methods can be categorized into four types: calibration methods using optimization algorithms, methods using the camera transformation matrix, the two-step method accounting for distortion compensation, and the dual-plane calibration method, which employs a more rational camera imaging model.
Based on the characteristics of solution algorithms, these methods can also be divided into direct nonlinear minimization methods (iterative methods), closed-form solution methods and two-step methods.
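All of these methods build on the pinhole projection model that maps camera-frame 3-D points to image pixels. A minimal, distortion-free sketch follows; the focal lengths and principal point are illustrative values, not calibrated parameters.

```python
def project_point(point_3d, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a 3-D point in the camera frame onto the image plane
    with an ideal (distortion-free) pinhole model: u = fx*X/Z + cx,
    v = fy*Y/Z + cy.  Intrinsics here are placeholder values."""
    X, Y, Z = point_3d
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v
```

Calibration is the inverse problem: given many known 3-D points and their observed pixel positions, recover the intrinsics (and distortion terms) that make this mapping fit the observations.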
(1) Calibration method using optimization algorithm
These camera calibration methods assume a fairly complex optical imaging model that incorporates various factors of the imaging process.
A purely linear solution of the model equations ignores the camera's nonlinear distortion, so nonlinear optimization algorithms must be applied to achieve adequate calibration accuracy.
This method has two main disadvantages. First, the calibration result depends on the initial values supplied for the camera parameters;
if these initial values are poorly chosen, the optimization routine struggles to converge to the correct calibration. Second, the optimization process is time-consuming and cannot produce real-time calibration results.
Dainis and Juberts proposed a method that uses direct linear transformation and introduces nonlinear distortion factors for camera calibration. Their system was designed to accurately measure a robot's trajectory.
The system can measure the robot's trajectory in real time, but does not require the calibration algorithm to provide real-time calibration for the system.
(2) Using the camera transformation matrix calibration method
Traditional methods in photogrammetry suggest that the equation describing the relationship between the three-dimensional spatial coordinate system and the two-dimensional image coordinate system is generally a nonlinear equation of the camera's internal and external parameters.
If we neglect the nonlinear distortion of the camera lens and treat the elements of the perspective transformation matrix as unknowns, a set of three-dimensional control points and their corresponding image points can be used to solve for each element of the perspective transformation matrix by linear methods.
The advantage of this type of calibration method is that it does not require optimization methods to solve for the camera parameters, allowing faster computation and real-time calculation of camera parameters.
However, there are still some shortcomings: Firstly, the calibration process does not consider the non-linear distortion of the camera lens, affecting the calibration accuracy.
Second, the number of unknown parameters in the linear equation exceeds the number of independent camera model parameters to be solved, which means that the unknowns in the linear equation are not mutually independent.
This overparameterization problem means that in situations where the image contains noise, the solution to the unknowns in the linear equation may fit the set of linear equations well, but the parameters derived from this may not necessarily align well with the real situation.
The camera calibration method using the perspective transformation matrix has been widely applied in real systems, obtaining satisfactory results.
(3) Two-step method
The idea of this calibration method is to first use the direct linear transformation method or perspective transformation matrix method to solve the camera parameters.
Then, using the obtained parameters as initial values, distortion factors are considered and optimization algorithms are used to further improve the calibration accuracy.
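The refinement step can be sketched as follows: a first-order radial distortion model, plus a coarse one-dimensional search over the distortion coefficient k1 that stands in for a full nonlinear optimizer. All point coordinates and the search range are hypothetical.

```python
def apply_radial_distortion(u, v, cx, cy, k1):
    """First-order radial distortion: offsets from the image centre
    are scaled by (1 + k1 * r**2), where r is the radial distance."""
    du, dv = u - cx, v - cy
    r2 = du * du + dv * dv
    s = 1.0 + k1 * r2
    return cx + du * s, cy + dv * s

def refine_k1(ideal_pts, observed_pts, cx, cy):
    """Step two of the two-step method: starting from the linear,
    distortion-free solution (the ideal points), find the k1 that best
    explains the observed image points.  A coarse grid search stands
    in here for a proper nonlinear optimization algorithm."""
    best_k1, best_err = 0.0, float('inf')
    for i in range(-100, 101):
        k1 = i * 1e-9
        err = 0.0
        for (u, v), (uo, vo) in zip(ideal_pts, observed_pts):
            ud, vd = apply_radial_distortion(u, v, cx, cy, k1)
            err += (ud - uo) ** 2 + (vd - vo) ** 2
        if err < best_err:
            best_k1, best_err = k1, err
    return best_k1
```

In practice the refinement jointly adjusts all intrinsic and distortion parameters with an iterative optimizer; the single-parameter search above only illustrates why good initial values from the linear stage make that refinement tractable.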