Application of machine learning and computer vision methods to determine the size of NPP equipment elements in difficult measurement conditions

The research relevance is determined by the need to improve the measurement of object dimensions in hard-to-reach conditions. In the modern industrial environment, where high measurement accuracy is critical for ensuring safety and maximising the efficiency of production processes, this topic is relevant in the context of rapid technological development and growing requirements for production quality. The study aims to evaluate the applicability of modern computer vision methods for measuring and reconstructing objects in difficult technical conditions, such as the enclosure of a water-water power reactor. The study employed 3D photogrammetry methods, including Depth from Stereo and Multi-View Stereo, as well as the Structure from Motion method. It was determined that modern computer vision methods, in particular machine learning methods, can be successfully used for measuring and reconstructing objects in hard-to-reach conditions. The study showed that the measurement accuracy can reach values close to 1 mm under ideal conditions at a distance of 1.5 m from the measuring device to the object. At the same time, the Multi-View Stereo method showed a more uniform spatial distribution of errors than the Depth from Stereo method. In practice, on real photographs, the Multi-View Stereo method proved more demanding of accurate camera positioning. Owing to its lower sensitivity to the exact camera coordinates, the Depth from Stereo method produced better results, with a smaller measurement error. The study also highlighted the ability of the proposed method to distinguish fluctuations in the height of the surface of the object, which is important for further applications in reactor maintenance and other areas of industry. The practical value of this research lies in the development and validation of methods for measuring and reconstructing objects in conditions where traditional methods become limited or impractical.

Keywords: water-water power type reactor; neutron fields; physical properties; binocular system; machine learning


INTRODUCTION
Research on machine learning and computer vision methods for determining the dimensions of Nuclear Power Plant (NPP) equipment elements under difficult measurement conditions is important for the development of new methods and technologies in the field of computer vision and automated inspection, which can be used in various industries, including nuclear power and industrial automation. The study of this topic is critically important in the context of the development of modern methods of computer vision and automated inspection. The need for accurate and reliable methods of surface measurement under a high radiation background is especially relevant in nuclear power, where safety and reliability are crucial. The use of automated systems based on machine learning can reduce human involvement in the measurement process, increasing the speed and efficiency of inspections. Accurate and efficient measurement of equipment dimensions is an important step in ensuring the safety and reliability of nuclear power plants, and the use of advanced technologies can help avoid malfunctions and ensure optimal efficiency. The problem addressed by the study is the current lack of research on the development and improvement of computer vision and image analysis methods for precision measurement of objects in difficult-to-access environments. In particular, there are problems related to the accuracy and reliability of measurements, especially under a high radiation background. In such conditions, traditional methods may become unsuitable or unsafe to use, which determines the relevance of developing new, more efficient, and safer measurement approaches in this area.
With the innovative development of technology, a range of new information technologies for image processing is emerging, which opens up new opportunities for further research. A.O. Zelinskyi & V.V. Lisovskyi (2023) are confident that there are many areas for their application, but many of them face the problem of using resources to fully realise their potential. Thus, the development of a system based on a cloud architecture for machine image analysis using modern analytical processing technologies is a relevant area of research. It is not specified, however, how resources affect the full realisation of the potential of image-processing information technologies; economic and technical limitations must also be considered.
According to T.V. Mosagutova (2021) and V. Levchenko et al. (2021), the long-term operation of nuclear power plants is largely determined by the choice of the right structural materials, and the development of new technologies for the selection of materials and equipment in the energy sector is therefore an important area of research. The purpose of this study is to compare two computer vision methods, namely Depth from Stereo (DFS) and Multi-View Stereo (MVS), for precision measurement of the reactor vessel surface in high radiation background conditions.

MATERIALS AND METHODS
The study considered the VVER-1000 water-water power reactor (WWPR), namely the problem associated with the swelling of the reactor's internal lining. The subject of the experimental study was a water-water reactor baffle made of stainless steel 08Kh18N10T. The reconstructed surface was compared with a reference three-dimensional model of the vessel.
The following neural network architectures were selected in the study: transMVSnet for the MVS approach (Ding et al., 2021) and CREstereo for the DFS approach (Li et al., 2022). These neural networks were chosen because of the high efficiency they demonstrated during testing on the Middlebury Benchmark (a system for independent testing of various algorithms and neural network architectures) among open-source neural networks. The transMVSnet architecture was chosen because of its use of transformers, which now show an advantage over traditional convolutional neural networks, especially in computer vision tasks. This architecture should ensure the stability of the neural network even with low image quality.
There are differences between MVS and DFS that need to be considered. For MVS, the camera positions must be determined accurately to obtain high-quality results, which can be achieved by placing the camera on a manipulator controlled by encoders and/or stepper motors. However, this poses an engineering challenge, as the long distance from the arm attachment point to the baffle can make it very difficult to design the arm for accurate positioning. As an alternative, the Structure from Motion (SFM) method is used, which determines the camera positions from the images themselves. Applying SFM to synthetic data generated in Blender showed a positioning accuracy of several centimetres.
For the DFS method, accurate camera calibration is critical, as the slightest distortion of the camera image will distort the results. Although DFS allows the depth map to be constructed without knowing the camera positions (a stereo camera is used, where the relative positions of the two cameras are known), the camera position is still required to determine the degree of deformation of the baffle by comparison with the reference model. Thus, it was important to determine the position of the cameras. This setup was chosen to test the neural networks under ideal conditions, where there are no distortion artefacts on the cameras and the exact parameters of their location are known.
The virtual environment for the research was created using Blender software. It included a simplified model of a baffle with no internal channels, and Physically Based Rendering (PBR) technology was used for texturing. The material included a texture layer, a normal map, a height map, and a roughness map. The texture layer was taken from an open source and applied only to the inner surface of the baffle using the UV mapping method. The material for the surface was then configured using the node-based interface of the Shading tab.
The 4D Light Field Benchmark add-on was used to simulate the movement and configure the parameters of the virtual camera. The trajectory of the camera was defined using key points. Photos were generated with the Cycles graphics engine (a physically based production renderer), which uses a path-tracing method for realistic lighting simulation (Choi et al., 2021). The add-on saved the camera images to separate files, including a depth map in PFM format and a configuration file (config.cfg) containing the camera parameters and its spatial position. This data can be used to train the neural network from scratch or to fine-tune its parameters. The frames and configuration parameters extracted from the source file were converted by a script, part of which was applied according to Meng & Gräbe (2021), into a format compatible with the transMVSnet neural network.
The images were processed based on the parameters specified in the bash script, such as resolution and the number of images in a batch. The paths to the input data, the fusion method (merging separate point clouds into one large cloud), the path to the output file, and the path to the Python scripts were also specified there. Processing 50 images at a resolution of 960×704 using a GTX 1070 graphics card (manufactured by NVIDIA, USA) took about 10 minutes. The output was a ready-made point cloud in .ply format.
The following methods and approaches were used in the study. The first was the structured light method, applied to the modelled environment in the Blender software: with the help of structured light, a texture was created and used for accurate surface measurements, providing accurate height and depth data. For real photographs, the photogrammetry method was used, which is based on the analysis of photographs from different cameras to measure and reconstruct 3D objects. The cameras were pre-calibrated and rectified to obtain accurate results. For real-world photos, a stereo imaging system was used to obtain duplicate images from different cameras; these images were processed to determine the depth and structure of the objects. The research also involved the use of neural networks for image analysis and processing. The neural networks were trained on large training datasets and used to solve 3D object reconstruction tasks. Python scripts and bash scripts were used to process, analyse, and visualise the data obtained from the cameras and neural networks.
These methods were used to perform both virtual and real measurements of surfaces and objects, compare the results, and determine their accuracy and efficiency in different conditions. The distance to the object in the study was calculated using formula (1):

l = (b · f_p) / Δd, (1)

where l is the distance to a point, b is the distance between the cameras (baseline), Δd is the offset (disparity) between identical pixels, and f_p is the focal length of the camera expressed in pixels.
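Formula (1) can be illustrated with a short Python sketch. The numeric values below (baseline, focal length in pixels, disparity) are illustrative assumptions, not parameters taken from the experiment.

```python
def stereo_depth(b, f_p, delta_d):
    """Distance to a point from a rectified stereo pair, formula (1).

    b       -- baseline, distance between the cameras (m)
    f_p     -- focal length expressed in pixels
    delta_d -- disparity, offset between identical pixels (px)
    """
    if delta_d <= 0:
        raise ValueError("disparity must be positive")
    return b * f_p / delta_d

# Illustrative values: 0.1 m baseline, 1000 px focal length, 67 px disparity
l = stereo_depth(b=0.1, f_p=1000.0, delta_d=67.0)
print(round(l, 3))  # prints 1.493, close to the 1.5 m working distance in the study
```

Note that depth resolution degrades with distance: for a fixed baseline and focal length, the same one-pixel disparity error corresponds to a larger depth error the farther the object is.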
The synthesis method was used to create models of the virtual environment in Blender.The results obtained in the study were compared with a reference three-dimensional model of the enclosure.

RESULTS
This study addresses the problem associated with the swelling of the reactor baffle of the VVER-1000 water-water power reactor. Specifically, the object of study is the effect of powerful neutron fields on the physical properties of the stainless steel used for the manufacture of the reactor pressure vessel. These physical properties, including stiffness, ductility, density, and volume, are subject to significant changes under the influence of intense neutron radiation. This phenomenon is quite dangerous in the context of nuclear reactor operation, since changes in the physical properties of the material can lead to disruption of the normal circulation of coolant in the first reactor circuit, or even to destruction of fuel assemblies during fuel reload. The transformation of the material due to neutron irradiation is a complex process, as it depends on several environmental variables, including temperature, pressure, and the energy spectrum of the emitted neutrons. An important aspect of this study is the uncertainty of mathematical modelling of material swelling under the influence of neutron fields, since existing mathematical models do not always reliably predict material behaviour in VVER-1000 reactor conditions (Qian & Liu, 2023).
Considering the aforementioned circumstances, solving this problem requires regular monitoring of the physical properties of the inner surface of the reactor vessel and further study of the influence of various factors on the material swelling process (Pylypchynets, 2022). These efforts are important for ensuring the safety and efficient operation of VVER-1000 nuclear reactors. This study describes an innovative method of applying 3D photogrammetry for non-contact inspection of the baffle surface. The application of this method involves the use of approaches based on MVS and DFS to obtain depth maps and 3D point clouds (Morgan et al., 2022).
The proposed approach introduces the possibility of measuring the surface of the reactor baffle without physical contact. The use of 3D photogrammetry techniques in combination with the MVS and DFS methods allows accurate and objective data on the depth and structure of the object's surface to be obtained. This approach is particularly useful when monitoring must be carried out on a shutdown reactor during maintenance, when high radiation conditions make traditional measurement methods difficult. The DFS method is based on a binocular system that includes two cameras placed at a certain distance from each other (the base). This method determines the distance to an object using the information obtained from both cameras. The basic principle of the method is to identify the same pixels in the images obtained from the right and left cameras and determine the displacement of these pixels. The distance to the object is then calculated using the simple formula (1).
Simple computer vision algorithms exist to determine pixel shifts in images, such as the Boyer-Moore algorithm (Faqih et al., 2022). These algorithms work effectively under relatively simple conditions, such as a flat, properly illuminated surface, no camera lens distortion, and accurate stereo pair rectification. However, they demonstrate limited effectiveness when processing tilted surfaces and highly detailed scenes. Machine learning methods have undergone significant transformations in recent years, particularly in the field of computer vision. Neural networks trained on large datasets have shown significant improvements in solving computer vision problems. The use of neural networks offers a great advantage, but only if the network has been trained on large training datasets. Typically, neural networks perform best when the training and test data are of similar origin. However, recent years have seen the development of new neural network architectures that perform well even when the data have different origins; for example, networks trained on synthetic data perform well when applied to real data. Typically, such architectures are based on transformers. Their main disadvantage, however, is the large number of weights, which requires special approaches to the training process. One of the important differences between classical algorithms and neural networks is the high resource intensity of the latter. This feature has made classical algorithms widespread in robotics, especially where it is necessary to operate with limited computing resources. However, with the advent of microcomputers equipped with graphics accelerators and supporting parallel computing, it has become possible to use complex neural networks in real time in robotics.
Unlike DFS, MVS is not self-sufficient and requires accurate information about the relative positions of the cameras in a series of measurements to build a depth map. The MVS method is similar to DFS, namely, it analyses the source images, identifies the same details, triangulates them, and builds a depth map. However, the MVS method has its advantages, including the use of only one camera and the ability to correct occlusions (dead zones) using a series of images, unlike DFS, which uses only two rectified images. Depth maps obtained by DFS or MVS, together with known camera parameters, can be used to build three-dimensional point clouds and further reconstruct a three-dimensional scene in the form of a mesh. For this purpose, ready-made implementations exist, such as those of S. Galliani et al. (2015) and C. Griwodz et al. (2021), which can be used to reconstruct a scene using the MVS method. They are characterised by open source code and a modular structure, which allows individual components of these programs to be used to create the final three-dimensional scene in combination with other neural networks.
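The step of turning a depth map and known camera parameters into a point cloud can be sketched as follows. This is a minimal illustration of pinhole back-projection; the function name and the camera parameters are illustrative assumptions, not the pipeline code used in the study.

```python
import numpy as np

def depth_to_points(depth, f_p, cx, cy):
    """Back-project a depth map into a 3D point cloud (pinhole camera model).

    depth -- H x W array of depths along the optical axis
    f_p   -- focal length in pixels; cx, cy -- principal point (pixels)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / f_p   # lateral coordinate
    y = (v - cy) * depth / f_p   # vertical coordinate
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# A flat wall 1.5 m away should back-project to points with z = 1.5
points = depth_to_points(np.full((5, 5), 1.5), f_p=1000.0, cx=2.0, cy=2.0)
```

A cloud produced this way is expressed in the camera frame; the known camera pose is then applied to place it in the scene frame before fusing clouds from several views.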
The DFS method requires fewer computing resources and uses only two cameras, so it is suitable for real-time surface research, provided that appropriate computing hardware is available. The general idea is to reconstruct in 3D the visible surface of a piece of equipment from a set of images using machine learning methods. In the case of NPP applications, it is assumed that a radiation-resistant camera is lowered into the shaft area of a shutdown reactor with the fuel elements removed, and photographic data of the selected part of the surface under study is collected from different angles. The water-water reactor baffle is made of stainless steel 08Kh18N10T, so it can reflect a significant part of the light flux energy. Reconstructing reflective surfaces is an extremely difficult task for traditional MVS systems. New methods such as NeRF or Gaussian splatting have come close to solving this problem; although these methods were created to generate new views from existing ones, research is underway to extract a mesh from the reconstructed scene, the results of which can be seen in projects such as DreamGaussian.
The swelling of the baffle metal under the influence of neutron radiation is a long process. During this time, the baffle becomes covered with oxides and deposits, which causes it to lose its reflectivity, become cloudy, and develop textural irregularities that allow a convolutional neural network to better distinguish features and thus improve the accuracy of depth map prediction. Lighting is also an important consideration. As shown by the Blender environment simulation, a point light source near the camera is highly undesirable, as it causes flares in the images. The best option is an extended light source placed perpendicular to the camera normal plane. Noise in the image caused by the interaction of ionising radiation with the camera's light-sensitive sensor should also be considered. Moreover, dead pixels can appear quickly, and the optics may become cloudy due to high-power radiation. All of this requires scanning the baffle in a short period (on the order of minutes).
After obtaining satisfactory quality images of the surface, the surface is reconstructed using DFS or MVS algorithms based on neural networks. The resulting reconstructed surface is compared to a reference three-dimensional model of the baffle. For this purpose, an alignment procedure is first performed, which brings the model and the resulting point cloud into register in three-dimensional space. The part of the reactor shaft visible above the baffle from the middle of the core can be used as an undeformed surface against which the alignment is performed. Alignment parameters, such as displacement, rotation, and scale, are selected by approximation methods until the error function reaches a minimum. This procedure requires determining seven degrees of freedom. Other possible variants of the alignment procedure exist and are discussed later.
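The seven-degree-of-freedom alignment described above (one scale, three rotation, and three translation parameters) can be sketched in Python. This is a minimal illustration assuming a nearest-neighbour mean squared distance as the error function and a derivative-free optimiser; it is not the authors' actual implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

def rotation(rx, ry, rz):
    """Rotation matrix from three Euler angles (applied in x, y, z order)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    return (np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
            @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            @ np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]]))

def align(cloud, reference):
    """Seven-DOF alignment of a point cloud to a reference model.

    The error function is the mean squared distance from each transformed
    cloud point to its nearest neighbour in the reference cloud.
    """
    tree = cKDTree(reference)

    def cost(p):
        s, rx, ry, rz, tx, ty, tz = p
        moved = s * cloud @ rotation(rx, ry, rz).T + np.array([tx, ty, tz])
        dist, _ = tree.query(moved)          # nearest-neighbour distances
        return float(np.mean(dist ** 2))

    # Start from the identity transform: unit scale, no rotation, no shift
    return minimize(cost, x0=[1.0, 0, 0, 0, 0, 0, 0], method="Nelder-Mead")
```

After the fit, `res.x` holds the scale, Euler angles, and translation; in practice a robust variant (trimming outliers, or a proper ICP loop) would be preferred for noisy clouds.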
After the alignment, it is possible to study the deviations of the cloud points from the reference model and estimate the surface deformation. The photographs were taken with a real camera using a lens with minimal (almost zero) distortion. The cameras were previously calibrated and rectified to create a stereo pair. For the MVS method, the images from the left camera of the stereo pair were used. The photos were taken in bright daytime light to obtain high-quality images. The test object was a calibration plate with known geometric characteristics.
To determine the camera positions, the Structure from Motion (SfM) method was used, implemented with the COLMAP software package (Schonberger & Frahm, 2016). After that, the point cloud was cropped to the area of the verification plate, and a plane was fitted using the least squares method. This plane served as the ground truth for calculating measurement errors for both the MVS and DFS approaches (Jin et al., 2023).
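The least-squares plane fit used as ground truth can be sketched as follows; a hypothetical noisy cloud sampled from a known plane stands in for the cropped verification-plate points.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an N x 3 point cloud."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def plane_errors(points, coeffs):
    """Perpendicular distances of the points from the plane z = a*x + b*y + c."""
    a, b, c = coeffs
    num = np.abs(a * points[:, 0] + b * points[:, 1] + c - points[:, 2])
    return num / np.sqrt(a * a + b * b + 1.0)

# Hypothetical stand-in for the cropped plate cloud: noisy samples of a plane
rng = np.random.default_rng(1)
xy = rng.uniform(-0.1, 0.1, size=(500, 2))
z = 0.01 * xy[:, 0] - 0.02 * xy[:, 1] + 1.5 + rng.normal(0.0, 1e-4, 500)
cloud = np.column_stack([xy, z])
plane = fit_plane(cloud)
```

The per-point distances returned by `plane_errors` are exactly the measurement errors referred to in the text: how far each reconstructed point lies from the ground-truth plane.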
The main features of these studies are the absence of image distortion by the camera and the known absolute values of the camera positions, which makes it possible to compare the accuracy of the results for both methods. DFS approach: the results of measurements on a synthetic dataset using a virtual stereo camera with the specified parameters at a distance of 1.5 m from the baffle surface are shown below (Table 1).
To capture the scene, the camera moves in a circular path around a certain point on the surface of the enclosure at a distance of 1.5 m (Fig. 1-3).

FWHM: 1 mm
Source: compiled by the authors

D. Belytskyi et al.
To capture the picture, the camera moves in a circular path around a certain point on the surface of the enclosure at a distance of 1.5 m (Fig. 4). MVS approach, research on real data: to verify the results, several images of the test plate were taken at a distance of 0.4 ± 0.1 m at 960×720 resolution (Fig. 5-7).
An experimental measurement was carried out to thoroughly analyse the sensitivity of the plane distance method. To accomplish this, a computer numerically controlled (CNC) machine was used to fabricate a test sample, as shown in Figure 8. A 150×150 mm plate was divided into nine 50×50 mm squares, each of which was recessed into the plate to a variable depth of 2.5 to 0.25 mm. Additionally, two flat surfaces were fitted on both sides of the 150×150 mm square and defined as reference surfaces. These surfaces were used to orient the reconstructed model in space. The depth map shown in Figure 9 was constructed using the acquired images. Next, a point cloud was built from the obtained depth map, similar to the previous experiments. The main goal was to manually measure the depth values in the centres of the 50×50 mm squares and compare them with the true depth values. To enable these measurements, it was decided to place the reconstructed surface in the XY plane. This can be done both with the built-in tools of MeshLab and with a Python script, as was implemented in this case.
From the resulting point cloud, only those points corresponding to the zero (reference) surfaces were selected. These points were then approximated by a plane using the least squares method. Knowing the equation of the plane, the necessary spatial displacement and rotation transformations were found, which map the original point cloud into the XY plane. The obtained measurement results (GT, determined by experts, and the DFS measurement results) were systematised and presented in Table 2. The data were visualised in a bar chart shown in Figure 10. To determine the error due to measurement fluctuations, the following approach was used: the reference side surfaces were approximated by a plane using the least squares method.
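The reorientation step above can be sketched in Python: fit a plane to the reference points by least squares, then rotate its normal onto the z axis with the Rodrigues formula. This is an illustrative reconstruction of that step, not the authors' script.

```python
import numpy as np

def to_xy_plane(cloud, plane_points):
    """Rotate and shift a cloud so the plane fitted to plane_points becomes z = 0.

    A plane z = a*x + b*y + c is fitted by least squares; the offset c is
    removed, and the unit normal of the plane is rotated onto the z axis
    using the Rodrigues rotation formula.
    """
    A = np.column_stack([plane_points[:, 0], plane_points[:, 1],
                         np.ones(len(plane_points))])
    (a, b, c), *_ = np.linalg.lstsq(A, plane_points[:, 2], rcond=None)
    n = np.array([-a, -b, 1.0])
    n /= np.linalg.norm(n)                  # unit normal of the fitted plane
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)                      # rotation axis (unnormalised)
    s, cth = np.linalg.norm(v), np.dot(n, z)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = (np.eye(3) + K + K @ K * ((1 - cth) / s**2)) if s > 1e-12 else np.eye(3)
    return (cloud - np.array([0.0, 0.0, c])) @ R.T
```

After this transformation, the z coordinate of each point is directly its height above the reference plane, so the step depths can be read off at the square centres.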
The average deviation of the points of the side surfaces from this plane was taken as the value of the error caused by fluctuations. This value was 0.23 mm. The resulting value was displayed on the diagram as error bars.
The graph shows that the deviation of the step depths determined by the DFS method from the true values generally does not exceed the error value. This indicates that the method can determine the height difference between points with an accuracy of 0.5 mm, provided that the points are more than 5 cm apart in the horizontal plane.

DISCUSSION
The study results demonstrate the importance and potential of using modern computer vision systems and neural networks for measurements in difficult conditions, in particular under a high radiation background, as may occur in nuclear power. Y. Xu et al. (2023) focus on the importance of assessing the condition of nuclear power plant (NPP) equipment and point out its significance for improving the safety and efficiency of NPPs. The paper is notable for offering a detailed literature review and highlighting issues that require further research in this area. The researchers emphasise the importance of using machine learning methods to assess the condition of NPP equipment, as this can help improve decisions on operation and maintenance. The authors assert that this approach can help detect anomalies in time, predict the remaining service life, and diagnose faults. The study includes an analysis of the main types of failures, data sources, maintenance strategies, and their interrelationships, with special emphasis on the use of deep learning methods to assess the condition of NPP equipment. It also highlights current challenges and future research directions in this area. This study is important for the community because it reviews current methods for assessing the condition of NPP equipment and indicates that there are significant opportunities for applying machine learning methods and improving maintenance in this area. Both studies have a common feature: the object of research in both is the use of modern technologies and methods to improve the accuracy and efficiency of measurements and assessments of the condition of objects. In both cases, the importance of choosing the right method and taking into account the specific conditions of measurement or assessment is emphasised. However, they differ in subject matter and methodology. The study described above is more focused on measuring and controlling surfaces in hard-to-reach environments. It examines two methods (DFS and MVS) for this purpose and compares their performance. The results emphasise the importance of choosing the right method depending on the conditions and the need for accurate measurements. On the other hand, Y. Xu et al. (2023) focus on monitoring the condition of NPP equipment and use machine learning methods for this purpose. Both studies have practical applications in various technology areas, but their subjects and methods differ. The research reported in the results is more focused on surface measurement and control, while Y. Xu et al. focus on monitoring the condition of NPP equipment using machine learning methods.
The study by Z. Sun et al. (2020) proposes a new approach to predicting downtime at nuclear power plants using computer vision and data-driven modelling. The main goal of their research is to automatically detect abnormal behaviour of people or teams during field operation processes and predict possible delays during NPP downtime. The researchers use computer vision algorithms to track people in video footage to detect abnormal behaviour. They then create a simulation model of field operation and training processes based on the data. This model uses the anomalies detected in the video to adjust parameters and predict possible delays in the workflow. The results of their research show that delays in task completion often occur in the initial phase of the workflow, and waiting queues build up due to the over-allocation of resources during mid-stage maintenance. Modelling shows that tasks on the critical path are more sensitive to these anomalies and can lead to delays in the workflow. In all cases, the research uses modern technology to improve efficiency and accuracy in a variety of industries, but the goals and methods differ. The research described above focuses on surface measurement and control, examining two methods (DFS and MVS) and comparing their performance. The study by Y. Xu et al. (2023) focuses on monitoring the condition of NPP equipment using machine learning methods, while the study by Z. Sun et al. (2020) proposes a new approach to predicting outages at NPPs. All these studies are important and have the potential to improve the respective industries.
Y. Yao et al. (2020) propose a new approach for fault diagnosis in nuclear power plants using a full-scale state image-based simulator (FDFSSII). The main idea is to create a series of grey images representing the transient process, including the normal state and the fault state, based on real-time monitoring data. Machine learning technology is used to extract and classify image features from the analysis of a large amount of historical and synthetic grey-image data. The main steps include kernel principal component analysis (KPCA) and classification of image features using classifiers developed with different learning methods. The diagnostic effect is evaluated using the F1 score. The simulation results show that the FDFSSII approach is successfully used for fault diagnosis at NPPs. It simplifies the process of nuclear reactor operation, which involves a large amount of monitoring data, and provides useful auxiliary information to operators. This approach can improve the safety and efficiency of NPPs by facilitating timely fault detection and diagnosis. Although both studies share a common factor, the use of modern technology to achieve their goals, their approaches and subjects differ. The study on computer vision systems focuses on improving the accuracy of measurements in hard-to-reach environments by comparing the performance of two methods (DFS and MVS) under different conditions. Y. Yao et al., on the other hand, diagnose faults in nuclear power plants by using machine learning to analyse condition images and detect anomalies.
X. Zhong & H. Ban (2022) consider the problem of classification and diagnosis of faults in the nuclear industry. In their opinion, the main problem is that data on faults and accidents at nuclear power plants are usually limited or difficult to obtain. The researchers propose a transfer learning method based on a convolutional neural network (CNN) to solve this problem. The method uses a two-level network. The first, shallower level is based on a CNN pre-trained on the ImageNet database and is designed to automatically extract features. The second, deeper level is configured for a specific classification problem. Using data on rotating machine faults, the proposed method requires only limited training data but achieves high accuracy. This means that the method can be an effective tool for fault diagnosis and classification in data-constrained environments at nuclear power plants, which can improve their safety and efficiency. Both studies are concerned with the use of advanced technologies to improve efficiency and safety in various industries.
The research described above focuses on the use of computer vision systems for precision measurements in hard-to-reach environments. Two methods (DFS and MVS) are investigated, along with their accuracy in virtual and real environments; the possibility of distinguishing surface height fluctuations is also examined. The study by X. Zhong & H. Ban (2022) focuses on the use of convolutional neural networks for the classification and diagnosis of faults in nuclear power plants: the problem of limited fault data is investigated, and a transfer learning method is proposed that achieves high diagnostic accuracy with a minimum amount of data. Both studies use modern technologies, including neural networks and computer vision, and help to improve accuracy and safety in a variety of fields, such as surface measurement and nuclear power plant diagnostics. Both can make significant contributions in their respective fields and help to improve processes and safety.

CONCLUSIONS
In this study, the possibility of using modern computer vision systems for precision measurements in hard-to-reach environments was examined in detail. The results contribute to the development of new methods for measuring and inspecting surfaces in various fields of science and technology. Experiments conducted in a simulated virtual environment show that the target level of accuracy, on average 1 mm, is achievable. Notably, the DFS method shows a significant increase in error on surfaces inclined at an angle to the camera, whereas the MVS method shows a more homogeneous spatial distribution of error. This observation highlights the importance of selecting the appropriate method depending on the specific conditions and the required measurement accuracy.
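The notion of a "homogeneous spatial distribution of error" can be quantified. The sketch below, on synthetic error maps (illustrative values, not the study's data), compares a DFS-like map whose error grows across a tilted surface against an MVS-like map with roughly uniform error, using the spread of per-tile mean errors as a uniformity measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-pixel depth-error maps (mm) standing in for DFS and MVS
# results; the shapes and magnitudes are assumptions for illustration.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
# DFS-like: error grows toward one edge (surface tilted w.r.t. the camera).
err_dfs = 0.3 + 0.03 * xx + 0.1 * rng.normal(size=(h, w))
# MVS-like: roughly uniform error over the whole surface.
err_mvs = 1.0 + 0.1 * rng.normal(size=(h, w))

def tile_spread(err, tiles=4):
    """Std of per-tile mean errors: low value = spatially uniform error."""
    th, tw = err.shape[0] // tiles, err.shape[1] // tiles
    means = [err[i * th:(i + 1) * th, j * tw:(j + 1) * tw].mean()
             for i in range(tiles) for j in range(tiles)]
    return float(np.std(means))

print(f"DFS-like tile spread: {tile_spread(err_dfs):.3f} mm")
print(f"MVS-like tile spread: {tile_spread(err_mvs):.3f} mm")
```

A larger tile spread for the tilted-surface map reflects exactly the behaviour reported above: DFS error is concentrated where the surface is inclined, while MVS error is distributed evenly.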
In real-world conditions, the key limitation of the MVS method was the need to determine the camera position accurately. Using the SFM method to estimate the camera position at a distance of about 1.5 m from the surface yielded a positioning accuracy of about 2 cm. The DFS method, which does not require precise camera parameters, therefore proved more accurate in practice: the average error over the surface was 0.6 mm for DFS compared to 1.2 mm for MVS. Additionally, the ability of the proposed method to distinguish surface height fluctuations was investigated. The results show that the approach is promising, as it can simplify reactor maintenance tasks and accurate measurements under a high radiation background. Such capabilities open up a wide range of possible applications of this method in various fields of science and technology.
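The DFS method rests on the classic pinhole-stereo relation Z = f·B/d, which explains why it needs only the rig's internal parameters rather than a precise external camera pose. A minimal sketch, with assumed focal length and baseline values (not the study's actual calibration):

```python
import numpy as np

# Illustrative stereo-rig parameters (assumed values, not from the study):
f_px = 1200.0      # focal length, pixels
baseline_m = 0.10  # distance between the two cameras, metres

def depth_from_disparity(disparity_px):
    """Pinhole-stereo relation Z = f * B / d; zero disparity maps to infinity."""
    d = np.asarray(disparity_px, dtype=float)
    return np.where(d > 0, f_px * baseline_m / d, np.inf)

# A point at the 1.5 m working distance quoted in the study produces a
# disparity of f*B/Z = 1200 * 0.10 / 1.5 = 80 px.
print(float(depth_from_disparity(80.0)))  # prints 1.5
```

Note that depth sensitivity to a one-pixel disparity error scales as Z²/(f·B), so accuracy at a given working distance is governed by the baseline and focal length of the rig, not by how well the rig's absolute position in the scene is known.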
Based on these results, two directions for further research can be suggested. First, improving the MVS and DFS algorithms could increase their accuracy and speed; investigating new neural network architectures better suited to 3D reconstruction tasks is one possible avenue. Second, the influence of physical conditions on the accuracy and reliability of the methods requires further study: radiation background, lighting, and other environmental parameters can affect the results, and it is important to determine how these factors influence measurements and how they can be corrected.

Figure 1. Error distribution histogram. Source: compiled by the authors
Figure 2.
Figure 3. Two-dimensional spatial distribution of error. Source: compiled by the authors
Figure 4. One-dimensional spatial distribution of the error. Source: compiled by the authors
Figure 5.
Figure 6.
Figure 7. Two-dimensional spatial distribution of error. Source: compiled by the authors
Figure 8. Test sample. Source: compiled by the authors
Figure 9.
Figure 10. Measurement results. Source: compiled by the authors
Table 2. Measurement results